Is there any way to get the actual data, instead of a download URL, from Firebase Storage? In my case, I store a string (some amount of HTML) in the storage and I want to get the actual data back when it's needed.
But I can't figure out how to do it. According to the Firebase documentation I can get the downloadable URL, but I can't fetch the actual data.
Here is the function to fetch data from the storage (in my test case I can get the URL properly, but I need the actual data):
// Create a reference to the file we want to download
var starsRef = storageRef.child('images/stars.jpg');
// Get the download URL
starsRef.getDownloadURL().then(function(url) {
// Insert url into an <img> tag to "download"
})
Thanks
Update 17.02.2020
I solved my problem; my mistake! It's possible to download a file from Storage using an AJAX request, as mentioned in the docs. Here is the simple function I defined. It returns a promise, and after it resolves you get the actual file/data.
async updateToStorage(pathArray, dataToUpload) {
  let address = pathArray.join("/");
  // Create a storage ref
  let storageRef = firebase.storage().ref(address);
  // Upload the data as a string (this returns a Firebase upload task snapshot)
  let uploadSnapshot = await storageRef.putString(dataToUpload);
  // Get the download URL, then fetch the actual contents
  let url = await uploadSnapshot.ref.getDownloadURL();
  const res = await fetch(url);
  const content = await res.text();
  return content;
}

const avatar = await updateToStorage(['storage', 'uid', 'avatarUrl'], avatarData);
// avatarData is the string to upload; avatar will be the actual data after download.
The Cloud Storage for Firebase APIs for JavaScript running in web browsers actually don't provide a way to download the raw data of a file. This is different on Android and iOS. Notice that StorageReference doesn't have any direct accessors for data, unlike the Android and iOS equivalents. I don't know why this is. Consider it a feature request that you can file with Firebase support.
You will probably need to set up some sort of API endpoint that your code can call, routed through the web server that serves your web site, or through something else that supports CORS, so that you can make a request from the browser that crosses web domains without security issues.
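If you go that route, a minimal sketch of such an endpoint using an HTTPS Cloud Function and the Admin SDK might look like the following; the function name, the path query parameter, and the permissive CORS header are all illustrative assumptions, not part of the answer above.
// Hypothetical HTTPS Cloud Function that returns the raw contents of a file.
// Assumes the Admin SDK is initialized with the project's default storage bucket.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.getFileContents = functions.https.onRequest(async (req, res) => {
  res.set('Access-Control-Allow-Origin', '*'); // tighten this for production
  const path = req.query.path; // e.g. ?path=storage/uid/avatarUrl
  try {
    // download() resolves with [Buffer] containing the file's raw bytes
    const [contents] = await admin.storage().bucket().file(path).download();
    res.status(200).send(contents.toString('utf8'));
  } catch (err) {
    res.status(404).send('File not found');
  }
});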
Related
Over the years on Snapchat I have saved lots of photos that I would like to retrieve now. The problem is they do not make it easy to export, but luckily you can go online and request all your data (that's great).
I can see the download link for each of my photos, and if I click download in the local HTML file it starts downloading.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each individual one will take ages. I've tried extracting all of the links from the download buttons, which gives me lots of URLs (great), but if you paste one of those URLs into the browser you get "Error: HTTP method GET is not supported by this URL".
I've tried a multitude of different Chrome extensions and none of them show the actual download, just the HTML on the left-hand side.
The download button is a clickable link that just starts the download in the tab; it sits under an <a> href.
I'm trying to figure out the best way to bulk download each of these individual files.
So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, or you can just use their method.
Open your console and use downloadMemories(<url>)
Or, if you don't have the URLs, you can retrieve them yourself:
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
eval(links[0].href);
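Alternatively, if you'd rather not eval the page's own helper, you can replicate the request it makes yourself. A rough console sketch follows; it assumes the download-link URL has already been extracted from the href, which in practice is a javascript: call you would need to parse first.
// Rough sketch: POST to a memory's download link, then open the returned file URL.
// downloadLink is assumed to be the plain URL pulled out of one of the hrefs above.
async function openMemory(downloadLink) {
  const res = await fetch(downloadLink, { method: 'POST' });
  const fileUrl = await res.text(); // the response body is the URL of the actual file
  window.open(fileUrl);
}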
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file, you can download them one by one with Python:
import requests

# POST to the memory's download link; the response body is the URL of the actual file
req = requests.post(url, allow_redirects=True)
file_url = req.text
file = requests.get(file_url)
Then get the correct extension and the date:
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
f.write(file.content)
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation, just place the memories_history.json file in the same directory and run it. It skips the files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
"Date": "2022-01-26 12:00:00 UTC",
"Media Type": "Image",
"Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries
const fetch = require('node-fetch'); // Needed for making fetch requests
const fs = require('fs'); // Needed for writing to filesystem

// wrap in an async function so we can use await (CommonJS has no top-level await)
(async () => {
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));

  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text(); // returns URL to file

  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });
  const fileName = 'memory.jpg'; // file name we want this saved as
  const fileData = download.body; // contents of the file (a readable stream)

  // Write the contents of the file to this computer using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();
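The full app loops over every memory in the export. As a rough sketch of that idea (not the app's actual code), assuming the export wraps the list of memories in a "Saved Media" key with the same fields as fakeMemory.json:
// Hypothetical extension of the example above: loop over every memory in the export.
// The top-level "Saved Media" key is an assumption about the export's structure;
// adjust it to whatever your memories_history.json actually contains.
const fetch = require('node-fetch');
const fs = require('fs');

(async () => {
  const data = JSON.parse(fs.readFileSync('memories_history.json'));
  const memories = data['Saved Media'] || []; // assumed key name

  for (const [i, memory] of memories.entries()) {
    const postRes = await fetch(memory['Download Link'], { method: 'POST' });
    const fileUrl = await postRes.text();

    const ext = memory['Media Type'] === 'Image' ? 'jpg' : 'mp4';
    const fileRes = await fetch(fileUrl);

    // stream each file to disk before moving on to the next one
    await new Promise((resolve, reject) => {
      const out = fs.createWriteStream(`memory_${i}.${ext}`);
      fileRes.body.pipe(out);
      out.on('finish', resolve);
      out.on('error', reject);
    });
  }
})();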
I want to write a fixture that simulates the export file and makes sure a file is downloaded by the browser actions.
Any example?
There's not a fancy way to check if the download has finished; TestCafe is somewhat limited in its ability to control downloads in the browser.
import fs from 'fs';

const fileName = 'junk.txt';
const downloadLocation = 'C:\\Wherever\\Downloads\\';
const fileDLUrlBase = 'https://example.com/downloads/';

fixture('download test fixture');

test('download test', async t => {
  await t.navigateTo(fileDLUrlBase + fileName);
  // Wait 30 seconds
  await t.wait(30000);
  await t.expect(fs.existsSync(downloadLocation + fileName)).ok();
});
You could convert that to a loop that checks, say, every 5 seconds for 60 seconds, if you wanted.
// Wait 15*1000 ms or less
async function waitForFile (path) {
for (let i = 0; i < 15; i++) {
if (fs.existsSync(path))
return true;
await t.wait(1000);
}
return fs.existsSync(path);
}
await t.expect(await waitForFile(/*path*/)).ok();
See also: Check the Downloaded File Name and Content
Since the question is vague, i.e. we don't know whether the OP's exported file is:
from a remote URL, or
a data URI,
I'm adding my answer for the latter, as I recently had a similar question. Instead of a remote URL download, I wanted to test the download of a data URI, but I couldn't find an answer, so I'm posting mine in case anyone has the same question.
Here's the download button with a data URI:
<a
  download="file.txt"
  target="_blank"
  href="data:text/plain;,generated text data that will force download on click"
  id="btn-download">
  Download
</a>
And a snippet of my test (in TypeScript):
import { readFileSync, unlinkSync } from "fs";

const DownloadButton = Selector("#btn-download");

// Simulate a file download
const fileName = await DownloadButton.getAttribute("download");
const filePath = `${downloadsFolder()}\\${fileName}`; // Used the downloads-folder package
await t.click(DownloadButton);

// Using Vladimir's answer to check for file download every x seconds
await t.expect(await waitForFile(filePath)).eql(true);

// We expect the contents of the input to match the downloaded file
await t.expect(JSON.parse(readFileSync(filePath, "utf8"))).eql(TestDocument2);

// Clean up
unlinkSync(filePath); // Or you can use the afterEach hook to do cleanups
The point is, if your downloaded file comes from an anchor href, you cannot use the navigateTo solution posted above for security reasons; you will get the Not allowed to navigate top frame to data URL error.
In recent months a security update for Google Chrome practically removed the ability to open base64 URIs in the browser directly with JavaScript.
This works for me:
await t
  .click(this.downloadPdfImage) // selector for the element that triggers the PDF download
  .wait(1000);
await t.expect(fs.existsSync(this.downloadLocation + this.fileNamePdf)).ok(); // assertion to verify the file was downloaded
In my Vue app I receive a PDF as a blob, and want to display it using the browser's PDF viewer.
I convert it to a file, and generate an object url:
const blobFile = new File([blob], `my-file-name.pdf`, { type: 'application/pdf' })
this.invoiceUrl = window.URL.createObjectURL(blobFile)
Then I display it by setting that URL as the data attribute of an object element.
<object
:data="invoiceUrl"
type="application/pdf"
width="100%"
style="height: 100vh;">
</object>
The browser then displays the PDF using the PDF viewer. However, in Chrome, the file name that I provide (here, my-file-name.pdf) is not used: I see a hash in the title bar of the PDF viewer, and when I download the file using either 'right click -> Save as...' or the viewer's controls, it saves the file with the blob's hash (cda675a6-10af-42f3-aa68-8795aa8c377d or similar).
The viewer and file name work as I'd hoped in Firefox; it's only Chrome in which the file name is not used.
Is there any way, using native JavaScript (including ES6, but no 3rd-party dependencies other than Vue), to set the filename for a blob / object element in Chrome?
[edit] If it helps, the response has the following relevant headers:
Content-Type: application/pdf; charset=utf-8
Transfer-Encoding: chunked
Content-Disposition: attachment; filename*=utf-8''Invoice%2016246.pdf;
Content-Description: File Transfer
Content-Encoding: gzip
Chrome's PDF viewer extension seems to rely on the resource name set in the URI, i.e. the file.ext in protocol://domain/path/file.ext.
So if your original URI contains that filename, the easiest might be to simply point your <object>'s data at the URI you fetched the PDF from directly, instead of going through a Blob.
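For instance, in the component from the question, that could look something like the following sketch; the route and invoiceId are purely illustrative, the point is only that the path ends in the desired filename.
// Sketch: point the <object> directly at a URL whose path ends in the filename,
// instead of at a blob: URL. The route shown here is hypothetical.
this.invoiceUrl = `/api/invoices/my-file-name.pdf?id=${invoiceId}`;
// <object :data="invoiceUrl" type="application/pdf"> then shows "my-file-name.pdf"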
Now, there are cases where that can't be done, and for these there is a convoluted way, which might not work in future versions of Chrome (and probably not in other browsers), that requires setting up a Service Worker.
As we said at first, Chrome parses the URI in search of a filename, so what we have to do is have a URI, with this filename, pointing to our blob: URI.
To do so, we can use the Cache API, store our File there as the Response to that URL, and then retrieve that File from the Cache in the ServiceWorker.
Or in code,
From the main page
// register our ServiceWorker
navigator.serviceWorker.register('/sw.js')
.then(...
...
async function displayRenamedPDF(file, filename) {
// we use a hard-coded fake path
// to not interfere with legit requests
const reg_path = "/name-forcer/";
const url = reg_path + filename;
// store our File in the Cache
const store = await caches.open( "name-forcer" );
await store.put( url, new Response( file ) );
const frame = document.createElement( "iframe" );
frame.width = 400;
frame.height = 500;
document.body.append( frame );
// makes the request to the File we just cached
frame.src = url;
// not needed anymore
frame.onload = (evt) => store.delete( url );
}
In the ServiceWorker sw.js
self.addEventListener('fetch', (event) => {
event.respondWith( (async () => {
const store = await caches.open("name-forcer");
const req = event.request;
const cached = await store.match( req );
return cached || fetch( req );
})() );
});
Live example (source)
Edit: This actually doesn't work in Chrome...
While it does correctly set the filename in the dialog, Chrome seems unable to retrieve the file when saving it to disk...
It doesn't seem to perform a network request (and thus our SW isn't catching anything), and I don't really know where to look now.
Still, this may be a good ground for future work on this.
Another solution, which I didn't take the time to check myself, would be to run your own PDF viewer.
Mozilla has made its JS-based viewer pdf.js available, so from there we should be able to set the filename (though once again, I haven't dug into it yet).
And as a final note, Firefox is able to use the name property of a File object that a blob URI points to.
So even though it's not what the OP asked for, in FF all it requires is:
const file = new File([blob], filename);
const url = URL.createObjectURL(file);
object.data = url;
In Chrome, the filename is derived from the URL, so as long as you are using a blob URL, the short answer is "No, you cannot set the filename of a PDF object displayed in Chrome." You have no control over the UUID assigned to the blob URL and no way to override it as the name of the page using the object element. It is possible that the PDF specifies a title internally, and that title will appear in the PDF viewer as the document name, but you will still get the hash name when downloading.
This appears to be a security precaution, but I cannot say for sure.
Of course, if you have control over the URL, you can easily set the PDF filename by changing the URL.
I believe Kaiido's answer expresses, briefly, the best solution here:
"if your original URI contains that filename, the easiest might be to simply make your object's data to the URI you fetched the pdf from directly"
For those coming from this similar question, it would have helped me to have a more detailed description of a specific implementation (working for PDFs) that gives the best user experience, especially when serving files that are generated on the fly.
The trick here is using a two-step process that perfectly mimics a normal link or button click. The client must (step 1) request the file be generated and stored server-side long enough for the client to (step 2) request the file itself. This requires you have some mechanism supporting unique identification of the file on disk or in a cache.
Without this process, the user will just see a blank tab while file-generation is in-progress and if it fails, then they'll just get the browser's ERR_TIMED_OUT page. Even if it succeeds, they'll have a hash in the title bar of the PDF viewer tab, and the save dialog will have the same hash as the suggested filename.
Here's the play-by-play to do better:
You can use an anchor tag or a button for the "download" or "view in browser" elements
Step 1 of 2 on the client: that element's click event can make a request for the file to be generated only (not transmitted).
Step 1 of 2 on the server: generate the file and hold on to it. Return only the filename to the client.
Step 2 of 2 on the client:
If viewing the file in the browser, use the filename returned from the generate request to then invoke window.open('view_file/<filename>?fileId=1'). That is the only way to indirectly control the name of the file as shown in the tab title and in any subsequent save dialog.
If downloading, just invoke window.open('download_file?fileId=1').
Step 2 of 2 on the server:
view_file(filename, fileId) handler just needs to serve the file using the fileId and ignore the filename parameter. In .NET, you can use a FileContentResult like File(bytes, contentType);
download_file(fileId) must set the filename via the Content-Disposition header as shown here. In .NET, that's return File(bytes, contentType, desiredFilename);
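For a non-.NET backend, a rough Express sketch of those two handlers might look like this; the routes, the in-memory cache, and the generatePdf helper are all assumptions standing in for your own storage and generation logic, not part of the answer above.
// Hypothetical Express versions of the endpoints described above.
const express = require('express');
const app = express();
const fileCache = new Map(); // fileId -> { bytes: Buffer, filename: string }

// Step 1: generate and hold the file, return only the filename
app.get('/generate_file', async (req, res) => {
  const bytes = await generatePdf(req.query.fileId); // hypothetical generator
  const filename = `report-${req.query.fileId}.pdf`;
  fileCache.set(req.query.fileId, { bytes, filename });
  res.json({ filename });
});

// Step 2 (view): the filename in the path controls the tab title and save dialog;
// the file itself is looked up by fileId and the filename parameter is ignored
app.get('/view_file/:filename', (req, res) => {
  const entry = fileCache.get(req.query.fileId);
  res.type('application/pdf').send(entry.bytes);
});

// Step 2 (download): Content-Disposition sets the saved filename
app.get('/download_file', (req, res) => {
  const entry = fileCache.get(req.query.fileId);
  res.attachment(entry.filename); // sets Content-Disposition: attachment; filename=...
  res.type('application/pdf').send(entry.bytes);
});

app.listen(3000);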
client-side download example:
download_link_clicked() {
  // show spinner
  ajaxGet(generate_file_url,
    {},
    (response) => {
      // success!
      // the server-side is responsible for setting the name
      // of the file when it is being downloaded
      window.open('download_file?fileId=1', "_blank");
      // hide spinner
    },
    () => { // failure
      // hide spinner
      // problem, notify pattern
    },
    null
  );
}
client-side view example:
view_link_clicked() {
  // show spinner
  ajaxGet(generate_file_url,
    {},
    (response) => {
      // success!
      let filename = response.filename;
      // simplest, reliable method I know of for controlling
      // the filename of the PDF when viewed in the browser
      window.open('view_file/' + filename + '?fileId=1');
      // hide spinner
    },
    () => { // failure
      // hide spinner
      // problem, notify pattern
    },
    null
  );
}
I'm using the pdf-lib library; you can click here to learn more about it.
I solved part of this problem by using the setTitle API (e.g. pdfDoc.setTitle("Some title text you want")).
The browser displayed my title correctly, but when I click the download button, the file name is still the previous UUID. Perhaps there is another API in the library that allows you to modify the download file name.
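For reference, a small sketch of how that looks with pdf-lib, applied to the blob from the question; the variable names are illustrative, and note this only affects the viewer's title, not the saved filename, as described above.
import { PDFDocument } from 'pdf-lib';

// Sketch: load the PDF bytes, set the document title, and rebuild the blob.
async function blobWithTitle(blob, title) {
  const pdfDoc = await PDFDocument.load(await blob.arrayBuffer());
  pdfDoc.setTitle(title);
  const updatedBytes = await pdfDoc.save();
  return new Blob([updatedBytes], { type: 'application/pdf' });
}

// Usage, e.g. in the Vue component from the question:
// const titledBlob = await blobWithTitle(blob, 'my-file-name.pdf');
// this.invoiceUrl = URL.createObjectURL(titledBlob);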
I just started working with the Microsoft Azure Storage SDK for NodeJS (https://github.com/Azure/azure-storage-node) and already successfully uploaded my first pdf files to the cloud storage.
However, now I have started looking at the documentation in order to download my files as a Node Buffer (so I don't have to use fs.createWriteStream), but the documentation doesn't give any examples of how this works. The only thing it says is "There are also several ways to download files. For example, getFileToStream downloads the file to a stream:", and then it only shows one example, which uses fs.createWriteStream, which I don't want to use.
I was also not able to find anything on Google that really helped me, so I was wondering if anybody has experience with this and could share a code sample with me?
The getFileToStream function needs a writable stream as a parameter. If you want all the data written to a Buffer instead of a file, you just need to create a custom writable stream.
const { Writable } = require('stream');

let bufferArray = [];
const myWriteStream = new Writable({
  write(chunk, encoding, callback) {
    bufferArray.push(chunk); // collect each chunk (a Buffer)
    callback();
  }
});

myWriteStream.on('finish', function () {
  // all the data is stored inside this dataBuffer
  let dataBuffer = Buffer.concat(bufferArray);
});
Then pass myWriteStream to the getFileToStream function:
fileService.getFileToStream('taskshare', 'taskdirectory', 'taskfile', myWriteStream, function(error, result, response) {
if (!error) {
// file retrieved
}
});
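If you prefer to just await a Buffer, the same idea can be wrapped in a promise-based helper; this is a sketch built on the callback API shown above, with the helper name being my own.
const { Writable } = require('stream');

// Sketch: wrap getFileToStream so it resolves with a single Buffer.
function getFileAsBuffer(fileService, share, directory, file) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    const writeStream = new Writable({
      write(chunk, encoding, callback) {
        chunks.push(chunk); // each chunk is a Buffer
        callback();
      }
    });
    fileService.getFileToStream(share, directory, file, writeStream, (error) => {
      if (error) return reject(error);
      resolve(Buffer.concat(chunks));
    });
  });
}

// Usage:
// const data = await getFileAsBuffer(fileService, 'taskshare', 'taskdirectory', 'taskfile');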
The same URL will return a different (random) image each time, and I also need to get a header from the response (which will also be different each time), so I can't fetch twice.
I tried to use a blob, but I get a warning that says 'blob' is undefined. The code looks like this:
let response = await fetch(URLs.host + URLs.imageCode);
let key = response.headers.get('key');
console.log(response.blob); // this will print 'undefined'
let blob = await response.blob();
this.setState({source: URL.createObjectURL(blob)});
...
<Image source={{uri: this.state.source}} />
So how can I get the header when loading the image?
You can't use fetch to get a blob in React Native (per this), because fetch is a native HTTP call that stores the blob on the native level and doesn't provide a reference to the actual memory for the JavaScript Blob object in res.blob().
The best option, until someone writes a bridge for this, is to use base64-encoded images, which are just strings, so you can get them with res.text().
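A rough sketch of that workaround in the component from the question follows; it assumes the server is changed to return the image body as a base64 string, while the 'key' header and URLs come from the question.
// Sketch of the base64 workaround (assumes the response body is now a base64 string).
async loadImage() {
  const response = await fetch(URLs.host + URLs.imageCode);
  const key = response.headers.get('key'); // the custom header is still readable
  const base64 = await response.text();    // base64-encoded image data
  this.setState({ key, source: `data:image/png;base64,${base64}` });
}

// ...
// <Image source={{ uri: this.state.source }} />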
Well, I found that we currently can't get at the response when loading an image with {{uri: xxx}} in React Native, so my solution was to talk to the server developer and have them design the API in another way.