Node webdav on Strato HiDrive: uploaded binary files broken - javascript

I just want to be able to upload and download binary files to Strato HiDrive using Node.js with webdav.
I tested uploading a JPG image with the following code:
const createClient = require("webdav");
const fs = require("fs");
let client = createClient(
  "https://myusername.webdav.hidrive.strato.com",
  "myusername",
  "mypassword"
);
let data = fs.readFileSync("./localfolder/logo.jpg", {encoding: "binary"});
client.putFileContents("/myfolder/logo.jpg", data, { "format": "binary", });
However, when I check the uploaded file by downloading it through their web client, it can't be opened and seems to be corrupted.
Is there a solution to this? Either by changing the code, or by suggesting a free WebDAV host (other than Strato HiDrive) where it might work?
Thanks a lot!
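A likely culprit, though I haven't verified it against HiDrive specifically: fs.readFileSync with encoding: "binary" returns a latin1 string rather than raw bytes, and the string can get re-encoded on upload. A sketch of the usual fix, reading the file with no encoding so you get a Buffer and uploading that directly:
const fs = require("fs");
// no encoding option: readFileSync returns a raw Buffer instead of a string
const data = fs.readFileSync("./localfolder/logo.jpg");
// the webdav client accepts a Buffer as-is; no format option needed
client.putFileContents("/myfolder/logo.jpg", data);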

Tampermonkey to open multiple javascript: hrefs in new tabs [duplicate]

Over the years on Snapchat I have saved lots of photos that I would like to retrieve now. The problem is they do not make it easy to export, but luckily you can go online and request all your data (that's great).
I can see the download link for each of my photos, and if I open the local HTML file and click download, the download starts.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each one would take ages. I've tried extracting all of the links behind the download buttons, which gives me lots of URLs (great), but if you paste a URL into the browser, "Error: HTTP method GET is not supported by this URL" appears.
I've tried a multitude of different Chrome extensions, and none of them capture the actual download, just the HTML shown on the left-hand side.
The download button is a clickable link that just starts the download in the tab; it lives inside an <a> href.
I'm trying to figure out the best way to bulk-download each of these individual files.
So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and call downloadMemories(<url>)
Or, if you don't have the URLs, you can retrieve them yourself:
// grab every download link in the first table of the local HTML page
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
// each href is a javascript: URL, so eval-ing it invokes their download function
eval(links[0].href);
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file, you can download them one by one with Python:
import requests

# the POST to the Download Link (url comes from the .json) returns the file's real URL as text
req = requests.post(url, allow_redirects=True)
response = req.text
file = requests.get(response)
Then get the correct extension and the date:
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
    f.write(file.content)
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation; just place the memories_history.json file in the same directory and run it. It skips files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial, which guides you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response body is a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
  "Date": "2022-01-26 12:00:00 UTC",
  "Media Type": "Image",
  "Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries
const fetch = require('node-fetch'); // needed for making fetch requests
const fs = require('fs'); // needed for writing to the filesystem

// top-level await isn't available in CommonJS, so wrap the logic in an async IIFE
(async () => {
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));
  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text(); // the response body is the URL to the file

  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });
  const fileName = 'memory.jpg'; // file name we want this saved as
  const fileData = download.body; // the response body is a readable stream

  // Write the contents of the file to disk using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();
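The example above hardcodes memory.jpg even when the memory is a video. A small sketch of deriving the name from the memory's metadata instead (the field names follow fakeMemory.json above; the exact 'VIDEO' value is an assumption carried over from the Python snippet earlier):
// replaces the hardcoded fileName line; the type strings are assumptions
const ext = memory['Media Type'].toUpperCase() === 'VIDEO' ? 'mp4' : 'jpg';
const fileName = `memory.${ext}`;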

Download a 'data:' image/file using puppeteer and node.js

I'm trying to download an image using node.js and puppeteer but I'm running into some issues. I'm using a webscraper to gather the links of the images from the site and then using the https/http package to download the image.
This works for the images using http and https sources but some images have links that look like this (the whole link is very long so I cut the rest):
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAw8AAADGCAYAAACU07w3AAAZuUlEQVR4Ae3df4yU930n8Pcslu1I1PU17okdO1cLrTD+g8rNcvRyti6247K5NG5S5HOl5hA2uZ7du6RJEGYPTFy1Nv4RUJy0cWVkeQ9ErqqriHNrR8niZuVIbntBS886rBZWCGHVsNEFRQ5BloPCzGn2B+yzZMLyaP........
I'm not sure how to handle these links or how to download the image. Any help would be appreciated.
You need to decode the base64 payload of the URL using Node's Buffer.
// the data:image/png;base64, content-type prefix has to be removed first
const fs = require('fs'); // needed to write the decoded image to disk
const data = 'iVBORw0KGgoAAAANSUhEUgAAAw8AAADGCAYAAACU07w3AAAZuUlEQVR4Ae3df4yU930n8Pcslu1I1PU17okdO1cLrTD+g8rNcvRyti6247K5NG5S5HOl5hA2uZ7du6RJEGYPTFy1Nv4RUJy0cWVkeQ9ErqqriHNrR8niZuVIbntBS886rBZWCGHVsNEFRQ5BloPCzGn2B+yzZMLyaP';
// Buffer.from(..., 'base64') decodes the string into the raw image bytes
// (the deprecated new Buffer(data) would treat it as text and re-encode it, corrupting the image)
const buffer = Buffer.from(data, 'base64');
// write the decoded bytes straight to disk; no further request is needed
fs.writeFileSync('image.png', buffer);
These are base64-encoded images (mostly used for icons and small images); the file data is embedded in the URL itself. You can simply skip them:
if (url.startsWith('data:')) {
  // a base64-encoded image: decode the payload with Buffer and write it to disk
} else {
  // a regular image URL: download it over http/https as before
}
If you really want to handle the base64 images, I can give you a workaround.
import { parseDataURI } from 'dauria';
import mimeTypes from 'mime-types';
import fs from 'fs';

// `file` holds the full data: URI string scraped from the page
const fileContent = parseDataURI(file);
// you probably need an extension for that image; fall back to .bin if the MIME type is unknown
let ext = mimeTypes.extension(fileContent.MIME) || 'bin';
fs.writeFile('a random file' + '.' + ext, fileContent.buffer, function (err) {
  if (err) console.log(err);
});

How to download a big file directly to the disk, without storing it in RAM of a server and browser?

I want to implement big-file downloads (approx. 10-1024 MB) from the same server where my app runs (no external cloud file storage, i.e. on-premises), using Node.js and Express.js.
I figured out how to do it by converting the entire file into a Blob, transferring it over the network, and then generating a download link with window.URL.createObjectURL(…) for the Blob. This approach works perfectly as long as the files are small; otherwise it is impossible to keep the entire Blob in the RAM of either the server or the client.
I've tried to implement several other approaches with the File API and AJAX, but it looks like Chrome loads the entire file into RAM and only then dumps it to disk. Again, that might be OK for small files, but for big ones it's not an option.
My last attempt was to send a basic GET request:
const aTag = document.createElement("a");
aTag.href = `/downloadDocument?fileUUID=${fileName}`;
aTag.download = fileName;
aTag.click();
On the server-side:
app.mjs
app.get("/downloadDocument", async (req, res) => {
req.headers.range = "bytes=0";
const [urlPrefix, fileUUID] = req.url.split("/downloadDocument?fileUUID=");
const downloadResult = await StorageDriver.fileDownload(fileUUID, req, res);
});
StorageDriver.mjs
export const fileDownload = async function fileDownload(fileUUID, req, res) {
  // e.g. C:\Users\User\Projects\POC\assets\wanted_file.pdf
  const assetsPath = _resolveAbsoluteAssetsPath(fileUUID);
  const options = {
    dotfiles: "deny",
    headers: {
      "Content-Disposition": "form-data; name=\"files\"",
      "Content-Type": "application/pdf",
      "x-sent": true,
      "x-timestamp": Date.now()
    }
  };
  res.sendFile(assetsPath, options, (err) => {
    if (err) {
      console.log(err);
    } else {
      console.log("Sent");
    }
  });
};
When I click on the link, Chrome shows the file in Downloads but with the status Failed - No file, and no file appears in the download destination.
My questions:
Why does sending a GET request result in Failed - No file?
As far as I understand, res.sendFile can be the right choice for small files, but for big ones it's better to use res.write, which can be split into chunks. Is it possible to use res.write with a GET request?
P.S. I've reworked this question to make it narrower and clearer. Previously it focused on downloading a big file from Dropbox without storing it in RAM; the answer can be found here:
How to download a big file from Dropbox with Node.js?
Chrome can't show nice download progress because the file is downloaded in the background; only afterwards is a link to the file created and "clicked" to force Chrome to show the save dialog for the already-downloaded file.
It can be done more easily: create a plain GET request and let the browser download the file itself, without AJAX.
app.get("/download", async (req, res, next) => {
const { fileName } = req.query;
const downloadResult = await StorageDriver.fileDownload(fileName);
res.set('Content-Type', 'application/pdf');
res.send(downloadResult.fileBinary);
});
function fileDownload(fileName) {
const a = document.createElement("a");
a.href = `/download?fileName=${fileName}`;
a.download = fileName;
a.click();
}
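Note that res.send(downloadResult.fileBinary) still loads the whole file into server RAM before sending it. A minimal streaming sketch instead, assuming the file lives on the server's local disk (the assets directory and path resolution here are hypothetical):
const fs = require('fs');
const path = require('path');

app.get('/download', (req, res) => {
  const { fileName } = req.query;
  // hypothetical assets folder; basename() guards against path traversal
  const filePath = path.join(__dirname, 'assets', path.basename(fileName));
  res.set('Content-Type', 'application/pdf');
  // 'attachment' tells the browser to save the response instead of rendering it
  res.set('Content-Disposition', `attachment; filename="${fileName}"`);
  // pipe the file to the response in chunks, so neither server nor browser buffers it whole
  fs.createReadStream(filePath).pipe(res);
});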

How can I make a stream for zip archiving of large files

How can I make a stream for zip archiving of large files? I'm making a server that will accept video uploads of any size (even >1 GB), split each video into parts, and return those parts to users in a zip archive. Or is there a good npm package for this, with examples?
I have been using ADM-ZIP (an npm module). That works great until the files get big, so I need a streaming solution. I have tried something like this:
var zlib = require('zlib');
var fs = require('fs');

var gzip = zlib.createGzip();
var inp = fs.createReadStream('input.txt');
var out = fs.createWriteStream('input.txt.gz');
inp.pipe(gzip).pipe(out);
But how can I add more than one file to this archive, and how can I hook into events, for example:
inp.on('data', function(data) {
  // add data to the zip and do other things, like counting percent processed
});
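gzip on its own compresses a single stream, so it can't hold multiple files; a zip container can. A sketch using the archiver npm package (API as I recall it; verify against the package's docs), streaming several files into one archive with progress events:
const fs = require('fs');
const archiver = require('archiver');

const output = fs.createWriteStream('parts.zip');
const archive = archiver('zip', { zlib: { level: 9 } });

// fires periodically with counts of entries and bytes processed
archive.on('progress', (p) => {
  console.log(`processed ${p.fs.processedBytes} bytes, ${p.entries.processed} entries`);
});
archive.on('error', (err) => { throw err; });

archive.pipe(output);
// append each part as a read stream so nothing is buffered whole in RAM
archive.append(fs.createReadStream('part1.mp4'), { name: 'part1.mp4' });
archive.append(fs.createReadStream('part2.mp4'), { name: 'part2.mp4' });
archive.finalize();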

upload a file in Node.JS

I'm using multiparty for uploading a file. I'm new to Node.js and streaming, so my question is: is it right to stream the file from the file.path returned by form.parse(), the way I do in the code below? I mean, this is an absolute path, and it obviously works on localhost because it is an absolute path on my current server, but will it also work when a user uploads a file from their own computer?
form.parse(req, function (err, fields, files) {
  var rs = fs.createReadStream(files.file[0].path);
  var fileData = ''; // accumulates the uploaded file's contents
  rs.on('readable', function () {
    var chunk;
    while (null !== (chunk = rs.read())) {
      fileData += chunk;
    }
  });
  rs.on('end', function () {
    console.log('importedData', fileData);
  });
});
Thanks, please let me know if you need more clarification!
That looks correct. By default, uploaded files are put in a temporary folder; if you're using Linux this will likely be /tmp. Your users' files end up in the same place when they upload through your front-end: multiparty writes the incoming stream to the server's disk, so the path never refers to the user's computer.
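If you want to keep the upload rather than just read it, a minimal sketch of streaming the temp file to a permanent location (the uploads/ directory is hypothetical; originalFilename is the file object's property name in recent multiparty versions):
const fs = require('fs');
const path = require('path');

form.parse(req, function (err, fields, files) {
  const tempPath = files.file[0].path; // multiparty's temp file on the server
  const destPath = path.join(__dirname, 'uploads', path.basename(files.file[0].originalFilename));
  // stream the temp file to its permanent home without buffering it in RAM
  fs.createReadStream(tempPath).pipe(fs.createWriteStream(destPath));
});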
