Using node.js and wget, wait for the download to end - javascript

I am using wget to download some images, but sometimes the image does not download entirely (it starts at the top and then stops abruptly...).
Here is my code:
try {
    var img = fs.readFileSync(pathFile);
} catch (err) {
    // Download image
    console.log('download');
    wget({
        url: reqUrl,
        dest: pathFile,
        timeout: 100000
    }, function (error, response, body) {
        if (error) {
            console.log('--- error:');
            console.log(error); // error encountered
        } else {
            console.log('--- headers:');
            console.log(response); // response headers
            console.log('--- body:');
            //console.log(body); // content of package
            var img = fs.readFileSync(pathFile);
            // and so on...
        }
    });
}
Basically, it tries to find the file located at pathFile, and if it does not exist, I download it on my server with wget. But it seems that wget fires the callback before the download has finished...
Thank you!

It seems that you are responding to some requests, but you're using blocking function calls (those with "Sync" in their name). I'm not sure if you realize it, but that blocks your entire process for the duration of the operation and will completely ruin any chance of concurrency if you ever need it.
Today you can use async/await in Node, which looks synchronous but doesn't block your code at all. For example, using the request-promise and mz modules:
const request = require('request-promise');
const fs = require('mz/fs');
and now you can use:
var img = await fs.readFile(pathFile);
which is not blocking but still lets you easily wait for the file to load before the next instruction is run.
Keep in mind that you need to use it inside of a function declared with the async keyword, e.g.:
(async () => {
// you can use await here
})();
You can get the file with:
const contents = await request(reqUrl);
and you can write it with:
await fs.writeFile(name, data);
There's no need to use blocking calls for that.
You can even use try/catch with that:
let img;
try {
img = await fs.readFile(pathFile);
} catch (e) {
img = await request(reqUrl);
await fs.writeFile(pathFile, img);
}
// do something with the file contents in img
One could even argue that you could remove the last await, but leaving it in lets potential errors be raised as exceptions via promise rejection.

Related

FetchError: request to url failed, reason ECONNREFUSED crashes server despite try catch being in place

The problem
FetchError: request to https://direct.faforever.com/wp-json/wp/v2/posts/?per_page=10&_embed&_fields=content.rendered,categories&categories=638 failed, reason: connect ECONNREFUSED
I'm doing some API calls for a website using fetch. Usually there are no issues: when a request "fails", the catch block handles it and my website continues to run. However, when the server that hosts the API is down, my fetch calls crash the website entirely (despite being inside a try/catch block).
As far as I'm concerned, shouldn't the catch block catch the error and continue to the next call? Why does it crash everything?
My wanted solution
For the website to just move on to the next fetch call / just catch the error and try again when the function is called again (rather than crashing the entire website).
The code
Here is an example of my fetch API call (process.env.WP_URL is https://direct.faforever.com):
async function getTournamentNews() {
    try {
        let response = await fetch(`${process.env.WP_URL}/wp-json/wp/v2/posts/?per_page=10&_embed&_fields=content.rendered,categories&categories=638`);
        let data = await response.json();
        // Now we get a js array rather than a js object. Otherwise we can't sort it out.
        let dataObjectToArray = Object.values(data);
        let sortedData = dataObjectToArray.map(item => ({
            content: item.content.rendered,
            category: item.categories
        }));
        let clientNewsData = sortedData.filter(article => article.category[1] !== 284);
        return clientNewsData;
    } catch (e) {
        console.log(e);
        return null;
    }
}
Here's the whole code (all of this is called from express.js at line 246; the extractor file is linked first).
Extractor / Fetch API Calls file
https://github.com/FAForever/website/blob/New-Frontend/scripts/extractor.js
Express.js file in line 246
https://github.com/FAForever/website/blob/New-Frontend/express.js#:~:text=//%20Run%20scripts%20initially%20on%20startup
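The "catch the error and move on" behaviour the asker wants can be sketched with a small wrapper (safeCall and fetchFn are illustrative names; fetchFn stands in for any of the fetch-based API calls):

```javascript
// Resolve to null instead of letting a rejected fetch propagate, so
// the caller can skip this result and try again on the next run.
async function safeCall(fetchFn) {
  try {
    return await fetchFn();
  } catch (e) {
    console.log(e); // log and move on instead of crashing
    return null;
  }
}
```

The key point is that every promise must be awaited inside the try/catch (or given a .catch()); a rejection that escapes, for example from a call made outside the try block, becomes an unhandled rejection, and on modern Node that terminates the process by default.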

Synchronous download text file JavaScript

I'm making a really lightweight web page that downloads a file and then displays content based on the file. There is no displayed content without the file, and I have no plans to scale up this web page. I do not want to use async methods unless I absolutely have to.
How do you download a file in such a way that JavaScript will pause until the file is downloaded?
Just wrap your entire script in an async IIFE and await the single network request:
// immediately invoked async function expression:
(async () => {
// all of your sync code before the fetch request to get the file
const response = await fetch(url);
// handle response, parse data, etc.
// all of the rest of your sync code after the fetch request,
// which won't execute until after the fetch promise is
// resolved as the response (or rejected)
})();

How to create a file in the cloud function and upload to a bucket

For some time now, I've been trying to create a file (sample.txt) containing some text (hello world) and finally upload it into a bucket. Is there a way I can implement this? My attempted code is below:
exports.uploadFile = functions.https.onCall(async (data, context) => {
    try {
        const tempFilePath = path.join(os.tmpdir(), "sample.txt");
        await fs.writeFile(tempFilePath, "hello world");
        const bucket = await admin.storage().bucket("allcollection");
        await bucket.upload(tempFilePath);
        return fs.unlinkSync(tempFilePath);
    } catch (error) {
        console.log(error);
        throw new functions.https.HttpsError(error);
    }
});
Anytime this code is run I get an error like this in the Firebase console:
TypeError [ERR_INVALID_CALLBACK]: Callback must be a function
at maybeCallback (fs.js:128:9)
at Object.writeFile (fs.js:1163:14)
You're not using fs.writeFile() correctly. As you can see from the linked documentation, it takes 3 or 4 arguments, one being a callback. The error message is saying you didn't pass a callback. On top of that, it doesn't return a promise, so you can't effectively await it. Consider using fs.writeFileSync() instead to make this easier.

Like we have readFileSync function in node js, I need http.getSync - a wrapper for http.get() to make it sync

I want to implement this https.getSync wrapper method so that it calls the API synchronously, just like the readFileSync method we use for reading files synchronously.
const https = require('https');
How should I implement this method?
https.getSync = (url) => {
    let data = '';
    https.get(url, resp => {
        resp.on('data', (chunk) => {
            data += chunk;
        });
        resp.on('end', () => {
            console.log(JSON.parse(data));
        });
    }).on("error", (err) => {
        console.log("Error: " + err.message);
    });
    return data;
}
I want the two calls below to be made synchronously, without changing the calling code. I don't want to use promises or callbacks here.
let api1 = https.getSync('https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY');
let api2 = https.getSync('https://api.nasa.gov/planetary/apod?api_key=NNKOjkoul8n1CH18TWA9gwngW1s1SmjESPjNoUFo');
You can use the npm package sync-request. It's quite simple.
var request = require('sync-request');
var res = request('GET', 'http://example.com');
console.log(res.getBody());
Here is the link: sync-request
Also read this announcement from the package, in case readers think using it is a good idea: "You should not be using this in a production application. In a node.js application you will find that you are completely unable to scale your server. In a client application you will find that sync-request causes the app to hang/freeze. Synchronous web requests are the number one cause of browser crashes."
In my opinion you should also avoid making synchronous HTTP requests. Instead, get comfortable with callbacks, promises, and async/await.

nodejs retrieve body from inside request scope

I'm new to nodejs and javascript in general. I believe this is an issue with scope that I'm not understanding.
Given this example:
...
...
if (url == '/') {
    var request = require('request');
    var body_text = "";
    request('http://www.google.com', function (error, response, body) {
        console.log('error:', error);
        console.log('statusCode:', response && response.statusCode);
        console.log('body:', body);
        body_text = body;
    });
    console.log('This is the body:', body_text);
    //I need the value of body returned from the request here..
}
//OUTPUT
This is the body: undefined
I need to be able to get the body from the response back and then do some manipulation, and I do not want to do all the implementation within the request function. Of course, it works if I move the log line inside:
request(function () { /* here */ })
It works. But I need to return the body in some way outside the request. Any help would be appreciated.
You can't do that this way because the request runs asynchronously. Working with callbacks is normal in JS, but you can do better with promises.
You can use the request-promise-native to do what you want with async/await.
async function requestFromClient(req, res) {
    const request = require('request-promise-native');
    const body_text = await request('http://www.google.com').catch((err) => {
        // always use catches to log errors or you will be lost
    });
    if (!body_text) {
        // sometimes you won't have a body, e.g. when the request errored
    }
    console.log('This is the body:', body_text);
    //I need the value of body returned from the request here..
}
As you can see, you must be inside an async function to use await on a promise.
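For what it's worth, the request family of modules is now deprecated; on Node 18+ the same pattern works with the built-in fetch (the url parameter here replaces the hard-coded google.com address, purely for illustration):

```javascript
// Same shape as the request-promise-native version, but using the
// global fetch available in Node 18+: await the response, then await
// the body text, with a catch so a failed request can't crash anything.
async function requestFromClient(url) {
  let body_text;
  try {
    const response = await fetch(url);
    body_text = await response.text();
  } catch (err) {
    console.log('request failed:', err); // log and fall through with undefined
  }
  console.log('This is the body:', body_text);
  return body_text;
}
```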
Recommendations:
JS the right way
ES6 Features
JS clean coding
More best practices...
Using promises
