I want to write a fixture to simulate the export file and make sure a file is downloaded from browser actions.
any example?
There's no elegant way to check whether the download has finished; TestCafe is somewhat limited in its ability to control downloads in the browser.
import fs from 'fs';

const fileName = 'junk.txt';
const downloadLocation = 'C:\\Wherever\\Downloads\\';
const fileDLUrlBase = 'https://example.com/downloads/';

fixture('download test fixture');

test('download test', async t => {
    await t.navigateTo(fileDLUrlBase + fileName);
    await t.wait(30000); // wait 30 seconds for the download to finish
    await t.expect(fs.existsSync(downloadLocation + fileName)).ok();
});
You could convert that to a loop that checks, say, every 5 seconds for 60 seconds, if you wanted.
// Poll for the file once a second, for up to 15 seconds
async function waitForFile (path) {
    for (let i = 0; i < 15; i++) {
        if (fs.existsSync(path))
            return true;
        await t.wait(1000); // `t` is the TestCafe test controller in scope
    }
    return fs.existsSync(path);
}

await t.expect(await waitForFile(/*path*/)).ok();
See also: Check the Downloaded File Name and Content
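For instance, a minimal sketch of a content check along those lines, assuming the downloadLocation and fileName variables from the test above (run inside a test body; 'expected text' is a placeholder):

const contents = fs.readFileSync(downloadLocation + fileName, 'utf8');
await t.expect(contents).contains('expected text');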
Since the question is vague, i.e. we don't know whether the OP's exported file is:
from a remote URL, or
a data URI,
I'm adding an answer for the latter, as I recently had a similar problem: instead of a remote URL download, I wanted to test the download of a data URI, and I couldn't find an answer, so I'm posting mine in case anyone has the same question.
Here's the download button with a data URI:
<a
    download="file.txt"
    target="_blank"
    href="data:text/plain;,generated text data that will force download on click"
    id="btn-download">
    Download
</a>
And a snippet of my test (in TypeScript):
import { Selector } from "testcafe";
import { readFileSync, unlinkSync } from "fs";
import downloadsFolder from "downloads-folder";

const DownloadButton = Selector("#btn-download");

// Simulate a file download
const fileName = await DownloadButton.getAttribute("download");
const filePath = `${downloadsFolder()}\\${fileName}`; // uses the downloads-folder package

await t.click(DownloadButton);

// Using Vladimir's answer to poll for the downloaded file every second
await t.expect(await waitForFile(filePath)).eql(true);

// We expect the contents of the downloaded file to match the input (TestDocument2)
await t.expect(JSON.parse(readFileSync(filePath, "utf8"))).eql(TestDocument2);

// Clean up (or use the afterEach hook to do cleanups)
unlinkFile: unlinkSync(filePath);
The point is, if your downloaded file comes from an anchor href, you cannot use the navigateTo solution posted above for security reasons; you will get the error "Not allowed to navigate top frame to data URL".
A Chrome security update published in recent months effectively removed the ability to open base64/data URIs in the browser directly from JavaScript.
This works for me:

await t
    .click(this.downloadPdfImage) // selector for the element that triggers the PDF download
    .wait(1000);

await t.expect(fs.existsSync(this.downloadLocation + this.fileNamePdf)).ok(); // assert the file exists on disk
The code below saves a file to the user's disk:
function handleSaveImg(event) {
    const image = canvas.toDataURL();
    const saveImg = document.createElement('a');
    saveImg.href = image;
    saveImg.download = saveAs;
    saveImg.click();
}

if (saveMode) {
    saveMode.addEventListener("click", handleSaveImg);
}
It uses an <a> tag to save some data (in my case, an image exported from a <canvas>).
But this saves directly to the disk, with no prompt asking where to save the file, nor under which name.
I want to force the displaying of the "Save as" dialog box, so that the user has to choose where they'll save that file.
Is there any way?
Yes, and it's called showSaveFilePicker().
This is part of the File System Access API, which is still a draft, but is already exposed in all Chromium browsers.
This API is quite powerful and will give your code direct access to the user's disk, so it is only available in secure contexts.
Once the Promise returned by this method resolves, you'll get access to a handle, from which you can obtain a WritableStream to write your data to.
It's a bit more complicated than download, but it's also a lot more powerful, since you can write as a stream, without needing to hold the whole data in memory (think recording a video).
In your case this would give
async function handleSaveImg(event) {
    const image = await new Promise((res) => canvas.toBlob(res));
    if (window.showSaveFilePicker) {
        const handle = await showSaveFilePicker();
        const writable = await handle.createWritable();
        await writable.write(image);
        writable.close();
    }
    else {
        const saveImg = document.createElement("a");
        saveImg.href = URL.createObjectURL(image);
        saveImg.download = "image.png";
        saveImg.click();
        setTimeout(() => URL.revokeObjectURL(saveImg.href), 60000);
    }
}
Here is a live demo (and the code).
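And because createWritable() hands you a stream, you can write data as it's produced instead of buffering it all, as in the video-recording case mentioned above. A minimal sketch (the mime type and one-second timeslice are assumptions for illustration):

async function recordToDisk(mediaStream) {
    const handle = await showSaveFilePicker();
    const writable = await handle.createWritable();
    const recorder = new MediaRecorder(mediaStream, { mimeType: "video/webm" });
    // Write each chunk to disk as it arrives, so the whole video never sits in memory
    recorder.ondataavailable = (e) => writable.write(e.data);
    recorder.onstop = () => writable.close();
    recorder.start(1000); // emit a chunk every second
    return recorder;
}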
No.
There is no way to persuade a browser to show a "Save As" dialog when it is configured to save downloads to a default folder without prompting; that is entirely under the user's control.
You can, however, specify a filename by assigning a string to the download property.
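For example, a minimal sketch of suggesting a filename that way (the blob contents are placeholders):

const blob = new Blob(["hello"], { type: "text/plain" });
const saveLink = document.createElement("a");
saveLink.href = URL.createObjectURL(blob);
saveLink.download = "hello.txt"; // suggested name; the destination folder stays under the user's control
saveLink.click();
setTimeout(() => URL.revokeObjectURL(saveLink.href), 60000);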
Over the years on Snapchat I have saved lots of photos that I would now like to retrieve. The problem is they do not make it easy to export, but luckily you can go online and request all your data (that's great).
I can see a download link for each of my photos, and if I click download in the local HTML file, the download starts.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each one would take ages. I've tried extracting all of the links behind the download buttons, which produces lots of URLs (great), but if you paste one of those URLs into the browser, you get "Error: HTTP method GET is not supported by this URL".
I've tried a multitude of different Chrome extensions, and none of them show the actual download, just the HTML on the left-hand side.
The download button is a clickable link (an <a href> element) that just starts the download in the tab.
I'm trying to figure out the best way to bulk-download each of these individual files.
So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and use downloadMemories(<url>)
Or if you don't have the URLs, you can retrieve them yourself:

var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
eval(links[0].href); // each href is a javascript: URI that calls the page's download function
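If you'd rather replicate the request yourself, here's a minimal sketch (the request shape follows the Python snippet below; memoryUrl is a placeholder for one of the extracted links):

// POST to the memory's download link; the response body is the real file URL
const res = await fetch(memoryUrl, { method: "POST" });
const fileUrl = await res.text();
window.open(fileUrl); // or fetch(fileUrl) and save the blob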
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file you can download them one by one with Python:

import requests

# POST to the memory's download link; the response body is the URL of the actual file
req = requests.post(url, allow_redirects=True)
response = req.text
file = requests.get(response)
Then get the correct extension and the date:
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
    f.write(file.content)
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation, just place the memories_history.json file in the same directory and run it. It skips the files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
    "Date": "2022-01-26 12:00:00 UTC",
    "Media Type": "Image",
    "Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// Import required libraries
const fetch = require('node-fetch'); // needed for making fetch requests
const fs = require('fs'); // needed for writing to the filesystem

// Wrapped in an async IIFE, since CommonJS has no top-level await
(async () => {
    const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));

    // POST to the Download Link; the response body is a URL to the file itself
    const response = await fetch(memory['Download Link'], { method: 'POST' });
    const url = await response.text();

    // We can now use the `url` to download the file
    const download = await fetch(url, { method: 'GET' });

    const fileName = 'memory.jpg'; // file name we want this saved as
    const fileData = download.body; // contents of the file as a readable stream

    // Write the contents of the file to disk using Node's file system
    const fileStream = fs.createWriteStream(fileName);
    fileData.pipe(fileStream);
    fileStream.on('finish', () => {
        console.log('memory successfully downloaded as memory.jpg');
    });
})();
I have a local JSON file which I intend to read/write from a NodeJS Electron app. I am not sure, but I believe that instead of using readFile() and writeFile(), I should get a FileHandle to avoid repeatedly opening and closing the file.
So I've tried to grab a FileHandle from fs.promises.open(), but the problem seems to be that I am unable to get a FileHandle for an existing file without truncating it and clearing it to 0 bytes.
const { resolve } = require('path');
const fsPromises = require('fs').promises;
function init() {
    // Save table name
    this.path = resolve(__dirname, '..', 'data', `test.json`);

    // Create/open the JSON file
    fsPromises
        .open(this.path, 'wx+')
        .then(fileHandle => {
            // We only get a handle here if the file didn't exist,
            // because the 'wx+' flag fails when the file already exists
            this.fh = fileHandle;
        })
        .catch(err => {
            if (err.code === 'EEXIST') {
                // File exists
            }
        });
}
Am I doing something wrong? Are there better ways to do it?
Links:
https://nodejs.org/api/fs.html#fs_fspromises_open_path_flags_mode
https://nodejs.org/api/fs.html#fs_file_system_flags
Because JSON is a text format that has to be read or written all at once and can't easily be modified or appended to in place, you're going to have to read the whole file or write the whole file at once.
So, your simplest option will be to just use fs.promises.readFile() and fs.promises.writeFile() and let the library open the file, read/write it, and close it. Opening and closing a file in a modern OS takes advantage of disk caching, so reopening a file you opened not long ago is not a slow operation. Further, since nodejs performs these operations in secondary threads in libuv, they don't block the main thread either, so this is generally not a performance issue for your server.
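For example, a minimal sketch of a read-modify-write cycle with those two calls (the file path and field name are placeholders):

const fsPromises = require('fs').promises;

async function bumpCounter(path) {
    // Read and parse the whole file, modify it, then write it all back
    const data = JSON.parse(await fsPromises.readFile(path, 'utf8'));
    data.counter = (data.counter || 0) + 1;
    await fsPromises.writeFile(path, JSON.stringify(data, null, 2));
}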
If you really wanted to open the file once and hold it open, you would open it for reading and writing using the r+ flag as in:
const fileHandle = await fsPromises.open(this.path, 'r+');
Reading the whole file would be simple as the new fileHandle object has a .readFile() method.
const text = await fileHandle.readFile({encoding: 'utf8'});
For writing the whole file from an open filehandle, you would have to truncate the file, then write your bytes, then flush the write buffer to ensure the last bit of the data got to the disk and isn't sitting in a buffer.
await fileHandle.truncate(0); // clear previous contents
let {bytesWritten} = await fileHandle.write(mybuffer, 0, someLength, 0); // write new data
assert(bytesWritten === someLength);
await fileHandle.sync(); // flush buffering to disk
Is there any way to get the actual data instead of a download URL from Firebase Storage? In my case, I store a string (some amount of HTML) in storage, and I want to get the actual data back when it's needed.
But I can't figure out how to do it. According to the Firebase documentation I can get a downloadable URL, but I can't fetch the actual data.
Here is the function that fetches data from storage (in my test case I can get the URL properly, but I need the actual data):
// Create a reference to the file we want to download
var starsRef = storageRef.child('images/stars.jpg');

// Get the download URL
starsRef.getDownloadURL().then(function(url) {
    // Insert url into an <img> tag to "download"
});
Thanks
Update 17.02.2020
I solved my problem; my mistake! It's possible to download a file from storage using an AJAX request, which is mentioned in the docs. Here is a simple function I defined that returns a promise; after it resolves you get the actual file/data.
async updateToStorage(pathArray, dataToUpload) {
    let address = pathArray.join("/");

    // Create a storage ref
    let storageRef = firebase.storage().ref(address);

    // Upload the file in string format (a Firebase upload task)
    let uploadPromise = await storageRef.putString(dataToUpload);

    // Fetch the uploaded data back through its download URL
    let url = await uploadPromise.ref.getDownloadURL();
    const res = await fetch(url);
    const content = await res.text();
    return content;
}
const avatar = await updateToStorage(['storage', 'uid', 'avatarUrl'], avatarData); // avatarData: the string to upload
// `avatar` will be the actual data after the download.
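If the file already exists in storage, the same trick works without the upload step. A minimal sketch (path is whatever reference your file lives at):

async function readFromStorage(path) {
    // Resolve a download URL for the existing file, then fetch its contents
    const url = await firebase.storage().ref(path).getDownloadURL();
    const res = await fetch(url);
    return res.text();
}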
The Cloud Storage for Firebase API for JavaScript running in web browsers actually doesn't provide a way to download the raw data of a file. This is different on Android and iOS: notice that the web StorageReference doesn't have any direct accessors for data, unlike its Android and iOS equivalents. I don't know why this is. Consider it a feature request that you can file with Firebase support.
You will probably need to set up some sort of API endpoint that your code can call, routed through the web server that serves your web site, or through something else that supports CORS, so that you can make a request from the browser that crosses web domains without security issues.
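For example, a minimal sketch of such an endpoint using Express and the Firebase Admin SDK (the route, bucket layout, and permissive CORS policy are assumptions for illustration):

const express = require('express');
const admin = require('firebase-admin');

admin.initializeApp(); // uses default credentials

const app = express();

app.get('/file-contents', async (req, res) => {
    res.set('Access-Control-Allow-Origin', '*'); // tighten this for production
    // Download the raw bytes server-side, where no browser CORS restrictions apply
    const [data] = await admin.storage().bucket().file(req.query.path).download();
    res.type('text/plain').send(data.toString('utf8'));
});

app.listen(3000);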
Assuming there's a server storing multiple files (not necessarily text documents):
http://<server>/<path>/file0001.txt ... http://<server>/<path>/file9999.txt
If the user were to download all of those files as one, how would I do it in JavaScript?
Normally the user would have to download all 9999 files and join them on their drive.
How can I prompt the download of a single file and stream the data of multiple files into it as JavaScript fetches them, as if it were a stream of one big file?
I imagine it would be something like this (excuse the lack of real JavaScript, I'm just trying to explain):
with (download prompt of 'onefile.txt') as connection:
    while connection is open:
        for file in file_list:
            get file
            return file.contents
    connection close
Downloading each file and storing it in memory until the last one is retrieved is not a good idea, since the overall size can be quite big.
I'm wondering if that's even possible. I could write it in Python, but that's another story; I want to make it a JavaScript function on a website.
I'm surprised JavaScript can't just create a "virtual localhost connection" where it uses some generator to "yield" the contents of each file...
Well, if you use a service worker, you can manipulate the response and give it a ReadableStream from which you "yield" the content of each file...
This is what StreamSaver does internally, but it takes away all the hassle.
I will show you an example using ES6 and StreamSaver.js.
It's not tested; it's just a rough idea.
This will consume very little memory, but StreamSaver is currently limited to Blink-based browsers.
// Promise.coroutine comes from Bluebird; it runs a generator, resolving each yielded promise
let download = Promise.coroutine(function* (files) {
    const fileStream = streamSaver.createWriteStream('onefile.txt')
    const writeStream = fileStream.getWriter()

    // Later you will be able to just simply do
    // yield res.body.pipeTo(fileStream) instead of pumping

    for (let file of files) {
        let res = yield fetch(file)
        let reader = res.body.getReader()

        let pump = () => reader.read()
            .then(({ value, done }) => !done &&
                // Write one chunk, then get the next one
                writeStream.write(value).then(pump)
            )

        yield pump()
    }

    // Close the stream when you are done writing
    writeStream.close()
})
download([
    'http://<server>/<path>/file0001.txt',
    'http://<server>/<path>/file9999.txt'
]).then(() => {
    alert('all done')
})
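As the comment in the snippet hints, once res.body.pipeTo() is available you can drop the manual pump entirely. A rough equivalent with async/await (assuming the same streamSaver setup; preventClose keeps the destination open between files):

async function download (files) {
    const fileStream = streamSaver.createWriteStream('onefile.txt')
    for (const file of files) {
        const res = await fetch(file)
        // Pipe this file into the sink without closing it, so the next file can follow
        await res.body.pipeTo(fileStream, { preventClose: true })
    }
    // Everything written; close the stream to finish the download
    await fileStream.close()
}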