I'm making a really lightweight web page that downloads a file and then displays content based on the file. There is no displayed content without the file, and I have no plans to scale this page up. I do not want to use async methods unless I absolutely have to.
How do you download a file in such a way that JavaScript will pause until the file is downloaded?
Just wrap your entire script in an async IIFE and await the single network request:
// immediately invoked async function expression:
(async () => {
    // all of your sync code before the fetch request to get the file
    const response = await fetch(url);
    // handle response, parse data, etc.
    // all of the rest of your sync code after the fetch request,
    // which won't execute until after the fetch promise is
    // resolved as the response (or rejected)
})();
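Alternatively, if the script is loaded as a module, top-level await achieves the same pause without the wrapper. A minimal sketch, assuming the page uses <script type="module"> and that url is defined elsewhere in your page:
// Inside a module, await is legal at the top level, so nothing
// after the fetch runs until the response (and body) have arrived
const response = await fetch(url);
const data = await response.json();
// ...display content based on data...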
I have a backend that returns large files in chunks as 206 Partial Content responses. I had this logic:
fetch(url, query)
.then((resp)=>resp.json())
.then(...)
but surprisingly the code fails because the full JSON object does not get returned by the server.
Is there some commonly used library to solve this (monkeypatch fetch) or should I write a service-worker or a proxy for this? Why does the default browser fetch not support fetching the full object?
It's not surprising that JSON parsing fails on a partial object. fetch is just going to fetch what you ask it to (well, it follows redirects), so if the query has options for requesting a partial response, that's what you'll get.
You can build up the JSON string until you have the full response to parse, something along these lines:
async function getAll() {
    // Start with a blank string
    let json = "";
    // Declare resp outside the loop so the while condition can see it
    let resp;
    do {
        // Get this chunk
        resp = await fetch(url, /*...query for next chunk...*/);
        // 206 Partial Content is in the 2xx range, so resp.ok covers it;
        // this only throws on genuine errors
        if (!resp.ok) {
            throw new Error(`HTTP error ${resp.status}`);
        }
        // Read this chunk as text and append it to the JSON
        json += await resp.text();
    } while (resp.status === 206);
    return JSON.parse(json);
}
Obviously, that's a rough sketch; you'll need to handle the Range header, etc.
That code also requests each chunk in series, waiting for the chunk to arrive before requesting the next. If you know in advance what the chunk size is, you might be able to parallelize it, though it'll be more complicated with chunks possibly arriving out of sequence, etc.
It also assumes the server doesn't throw you a curveball, like giving you a wider range than you asked for that overlaps what you've already received. If that's a genuine concern, you'll need to add logic for it.
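For concreteness, here is one way the Range bookkeeping might look. This is only a sketch under assumptions the question doesn't confirm: that the server honors standard byte-range requests and reports the total size in the Content-Range header ("bytes start-end/total"); the 64 KiB chunk size is purely illustrative:
async function getAll(url, chunkSize = 65536) {
    const parts = [];
    let offset = 0;
    let total = Infinity;
    while (offset < total) {
        const resp = await fetch(url, {
            headers: { Range: `bytes=${offset}-${offset + chunkSize - 1}` },
        });
        if (!resp.ok) {
            throw new Error(`HTTP error ${resp.status}`);
        }
        parts.push(new Uint8Array(await resp.arrayBuffer()));
        if (resp.status !== 206) {
            break; // server ignored the Range header and sent everything at once
        }
        // e.g. "bytes 0-65535/1048576" tells us the full size is 1048576
        const match = /\/(\d+)$/.exec(resp.headers.get("Content-Range") ?? "");
        if (match) total = Number(match[1]);
        offset += chunkSize;
    }
    // Concatenate and decode once, so a multi-byte UTF-8 sequence that
    // straddles a chunk boundary is still decoded correctly
    const body = new Uint8Array(parts.reduce((sum, p) => sum + p.length, 0));
    let pos = 0;
    for (const p of parts) {
        body.set(p, pos);
        pos += p.length;
    }
    return JSON.parse(new TextDecoder().decode(body));
}
Collecting raw bytes and decoding once at the end also avoids corrupting a multi-byte character that happens to be split across two chunks, which the text()-per-chunk approach above could do.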
This question already has answers here:
Why does .json() return a promise?
import fetch from "node-fetch"
const url = "https://jsonplaceholder.typicode.com/posts/1"
const getData = fetch(url)
getData.then((data)=>{console.log(data.json())})
Making a simple GET request using node-fetch. I am still getting a pending promise here even though I am using a .then block.
My thesis is that after the promise has resolved (that is, the data has been returned from the server), the function defined inside the .then block will be sent to the microtask queue, from where it will be taken to the call stack by the event loop.
The process of fetching data is split into two parts.
The first part waits for the meta-information (status code and other headers).
The second part fetches the contents (body) of the response.
import fetch from "node-fetch"
const url = "https://jsonplaceholder.typicode.com/posts/1"
const getData = fetch(url)
getData.then((response)=>{
    // here the meta information of the response is received,
    // but the body might still be in transit.
    // returns a promise that waits for the body to be received as well
    return response.json()
})
.then((data) => {
    // here the body is received and parsed
    console.log(data)
})
This is especially helpful if the files you request are large or take a long time to download for other reasons. And you get information about the file early: its name, its size (if the server sends it), and so on.
In theory (I don't know the exact implementation of node-fetch, so I can't say for sure), this should make it possible to abort the fetch of the body in case it is too large.
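For instance, a minimal sketch of that idea, assuming a recent node-fetch (v3+), a Node version with a global AbortController, and a server that sends a Content-Length header:
import fetch from "node-fetch";

const url = "https://jsonplaceholder.typicode.com/posts/1";
const controller = new AbortController();
const response = await fetch(url, { signal: controller.signal });

// The headers are available here even though the body may still be in flight
const size = Number(response.headers.get("content-length") ?? 0);
if (size > 1_000_000) {
    // Give up before downloading a body we consider too large
    controller.abort();
} else {
    const data = await response.json();
    console.log(data);
}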
I'm working on a Chrome extension. I have a function that needs to wait for the asynchronous chrome.storage.set to set data and then use that stored value.
Example: this function is setting access token in chrome storage
async setAuthTokenInStorage(token) {
    const dataToStore = { accessToken: token.access_token, refreshToken: token.refresh_token };
    chrome.storage.sync.set({ token: dataToStore });
},
After this function runs, we immediately fire an HTTP request in a different place in the application. That request should use the new token, but it fails because the data is not stored yet.
Is there some way we can wait for setAuthTokenInStorage() to finish executing? Meaning that we would make this method 'synchronous' somehow. The goal is for this function to return only once the data has been stored.
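A minimal sketch of one way to do this, assuming a Manifest V3 extension: when chrome.storage.sync.set is called without a callback it returns a promise, so the write can be awaited, and the caller can in turn await setAuthTokenInStorage before firing the request (makeAuthenticatedRequest below is a hypothetical stand-in for the app's HTTP call):
const auth = {
    async setAuthTokenInStorage(token) {
        const dataToStore = {
            accessToken: token.access_token,
            refreshToken: token.refresh_token,
        };
        // In Manifest V3, chrome.storage.sync.set returns a promise when
        // no callback is passed, so awaiting it ensures the write finished
        await chrome.storage.sync.set({ token: dataToStore });
    },
};

// Caller side: only fire the request once the token is persisted
async function storeTokenThenCall(token) {
    await auth.setAuthTokenInStorage(token);
    await makeAuthenticatedRequest(); // hypothetical HTTP helper
}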
I'm trying to make assertions on XHR, but can't find a right way on how to grab the correct request.
The task is simple: click a button, then wait for the network request and make assertions on its response and request bodies.
The problem is that before I call the changePageSize() function in my tests, there are already multiple requests in the network with the exact same URLs and methods. The only difference between them is the request and response body, so my code just grabs the first request that matches the URL I provided. Is there any way to specify the exact network request that I want to use in my tests?
Here is the function:
static async changePageSize(selector: string): Promise<any> {
    const [resp]: any = await Promise.all([
        page.waitForResponse(`**${paths.graph}`),
        this.setPagination(selector),
    ]);
    return [resp]
}
And then I'm using it in my tests:
const [response] = await myPage.changePageSize(selector);
expect(await response.text()).toContain(`${size}`);
expect(response.request().postData()).toContain(`${size}`);
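One way to pick out the right request (a sketch, assuming this is Playwright, whose page.waitForResponse also accepts a predicate function, and assuming the new page size appears in the request's POST body and that size is passed into the method) is to match on the body instead of the URL alone. Inside changePageSize, the wait could become:
const [resp] = await Promise.all([
    page.waitForResponse((response) =>
        response.url().includes(paths.graph) &&
        // otherwise-identical requests differ only by body, so match on it
        (response.request().postData() ?? "").includes(`${size}`)
    ),
    this.setPagination(selector),
]);
return [resp];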
I am using wget to download some images, but sometimes the image does not download entirely (it starts at the top, and then stops abruptly...)
Here is my code:
try {
    var img = fs.readFileSync(pathFile);
}
catch (err) {
    // Download image
    console.log('download')
    wget({
        url: reqUrl,
        dest: pathFile,
        timeout: 100000
    }, function (error, response, body) {
        if (error) {
            console.log('--- error:');
            console.log(error); // error encountered
        } else {
            console.log('--- headers:');
            console.log(response); // response headers
            console.log('--- body:');
            //console.log(body); // content of package
            var img = fs.readFileSync(pathFile);
and so on...
Basically, it tries to find the file located at pathFile, and if it does not exist, I download it to my server with wget. But it seems that wget launches the callback before finishing the download...
Thank you!
It seems that you are possibly responding to some requests, but you're using blocking function calls (those with "Sync" in their names). I'm not sure if you realize it, but that blocks your entire process for the duration of the operation and will completely ruin any chance of concurrency if you ever need it.
Today you can use async/await in Node, which looks synchronous but doesn't block your code at all. For example, using the request-promise and mz modules:
const request = require('request-promise');
const fs = require('mz/fs');
and now you can use:
var img = await fs.readFile(pathFile);
which is not blocking but still lets you easily wait for the file to load before the next instruction is run.
Keep in mind that you need to use it inside of a function declared with the async keyword, e.g.:
(async () => {
// you can use await here
})();
You can get the file with:
const contents = await request(reqUrl);
and you can write it with:
await fs.writeFile(name, data);
There's no need to use blocking calls for that.
You can even use try/catch with that:
let img;
try {
    img = await fs.readFile(pathFile);
} catch (e) {
    img = await request(reqUrl);
    await fs.writeFile(pathFile, img);
}
// do something with the file contents in img
One could even argue that you could remove the last await, but leaving it in means potential errors are raised here as exceptions via the promise rejection.