I have this endpoint that writes JSON data into a static file.
fs.writeFile("name.json", JSON.stringify(JSON.parse(data)), function (err) {
if (err) {
return console.log(err);
}
console.log("The file was saved!");
});
And then I have another endpoint that sends the data inside this JSON file.
const fileData = require("./name.json")
res.json(fileData)
However, after I trigger a change inside the file and then try to get the new data, the endpoint doesn't return it; it sends the old data instead. But if I restart the server and then try again, it sends the new data. I can see inside the file that the changes are there after I write the data, but it still doesn't send them. It feels like some kind of caching. I've tried to disable the ETag, but still no success.
app.set('etag', false)
app.use(express.static("*", {
  etag: false,
}))
When you start the server, fileData is read once by require and then kept in memory, regardless of any later file changes, which is why you only get fresh data after restarting the server.
It's caching, but it's not traffic caching; see: What is require?
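For illustration, the module cache is easy to observe (a minimal sketch, using the same name.json):

// require caches by resolved path: the second call returns the same
// in-memory object, even if the file changed on disk in between.
const a = require("./name.json");
const b = require("./name.json");
console.log(a === b); // true

// The cache can be cleared manually, though re-reading the file is simpler:
delete require.cache[require.resolve("./name.json")];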
What you actually need is to read the file every time the endpoint is accessed. So, instead of require, just read the file on each request and you'll get fresh data (and since it's JSON, you need to parse it before sending it):
const fileData = fs.readFileSync("./name.json");
res.json(JSON.parse(fileData));
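For completeness, here is a minimal sketch of that approach inside an Express route handler (the /data path and port are made up for illustration):

const express = require("express");
const fs = require("fs");

const app = express();

// Hypothetical endpoint: re-read name.json on every request, so changes
// written by the other endpoint are always picked up.
app.get("/data", (req, res) => {
  fs.readFile("./name.json", "utf8", (err, fileData) => {
    if (err) return res.status(500).json({ error: "Could not read file" });
    res.json(JSON.parse(fileData));
  });
});

app.listen(3000);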
Related
I want to upload the content of an Excel file to the server in order to get its data and do some stuff with it.
I came up with the following code, but it seems like it is not working properly, as the following error displays in the console: Error: Can't set headers after they are sent.
The file is getting uploaded into the folder and the JSON message is being displayed. However, I do not know if I am going to face any issue in the future.
Actually, I just need the Excel data; there's no need for the Excel file itself to be stored. Maybe you could give me a workaround, guys.
const router = express.Router();

const storage = multer.diskStorage({
  destination(req, file, cb) {
    cb(null, 'uploads/');
  },
  filename(req, file, cb) {
    cb(
      null,
      `${file.fieldname}-${Date.now()}${path
        .extname(file.originalname)
        .toLowerCase()}`
    );
  },
});

const excelFilter = (req, file, cb) => {
  if (
    file.mimetype.includes('excel') ||
    file.mimetype.includes('spreadsheetml')
  ) {
    cb(null, true);
  } else {
    cb('Please upload only excel file.', false);
  }
};

const upload = multer({
  storage,
  fileFilter: excelFilter,
});

router.post('/', upload.single('file'), (req, res) => {
  var workbook = XLSX.readFile(req.file.path);
  var sheet_name_list = workbook.SheetNames;
  var xlData = XLSX.utils.sheet_to_json(workbook.Sheets[sheet_name_list[0]]);
  res.json(xlData).sendFile(`/${req.file.path}`, { root: path.resolve() });
});
Can I use res.json and res.sendFile together in the same API endpoint in Express?
No, you cannot. Each of those methods sends a complete HTTP response (including calling res.end(), which terminates the HTTP request), and you can only send one HTTP response to each incoming request. The particular error you're getting occurs because res.sendFile() tries to configure the response it's getting ready to send and finds that the HTTP response object has already been used for sending a response and can't be used again.
Ordinarily, if you wanted to send two different pieces of data, you would just combine them into a single JavaScript object and call res.json() on the object that contains both pieces of data.
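For the upload route above, that might look something like this (a sketch, assuming you only need the parsed rows plus some metadata about the stored file, not the binary itself):

router.post('/', upload.single('file'), (req, res) => {
  const workbook = XLSX.readFile(req.file.path);
  const xlData = XLSX.utils.sheet_to_json(workbook.Sheets[workbook.SheetNames[0]]);
  // One combined object, one response.
  res.json({
    rows: xlData,
    file: { name: req.file.filename, path: req.file.path },
  });
});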
But sending a binary file is not something you can easily put in a JSON package. You could construct a multipart response where one part is the JSON and one part is the file, or you could JSON-encode the binary data (though that's inefficient). I presume there are modules that would help you do that, but for most clients, that isn't what they're expecting or equipped to handle.
The only way to a proper solution is for us to understand what client/server workflow you're trying to implement here and why you're trying to send back the same file that was just uploaded. There would normally not be a reason to do that since the client already has that data (they just uploaded it).
My task is to download a JSON file from a website (PubChem) using only the query string (h2o, for example) and JS. I know it's possible to do with parsing, but that would be too much code because of the number of pages I'd need to parse to reach the destination. Are there any other options to solve the problem?
Using Google didn't give me any ideas ):
You will still need to do some parsing if you really want to automate this, since using only a query parameter will get you to the main page that lists the 'articles', and you'd need to go in to find the URL that gives you the JSON format. But! I think you can "reverse engineer" it, since the URLs for an article and its JSON format are very similar.
I checked out the website and tried to download one of the files for https://pubchem.ncbi.nlm.nih.gov/compound/3076959 and it turns out the URL for the JSON representation was https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/748328/JSON/
As you can see, they are very similar, and you might be able to figure out how different topics, such as compound for example, construct the JSON output endpoint.
To download the JSON files using NodeJS, you can use the node-fetch module or the axios library to send HTTP requests to the JSON endpoint, and from there save the response to a file on your machine.
Here is an example of how you can do this with node-fetch and the NodeJS fs module in order to save the file to your machine.
const fs = require("fs");
const fetch = require("node-fetch");
async function downloadASJson(url, fileName) {
const response = await fetch(url);
const jsonContent = await response.buffer();
fs.writeFile(`${fileName}.json`, jsonContent, "utf8", function (err) {
if (err) {
console.log("An error occured while writing JSON Object to File.");
return console.log(err);
}
console.log("JSON file has been saved.");
});
}
try {
downloadASJson(
"https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/748328/JSON/",
"2-Methyl-3-(5'-bromobenzofuroyl-2')-4-dimethylaminomethyl-5-hydroxybenzofuran HCl H20"
);
} catch (err) {
console.log(error);
}
Save the code above in a file called app.js, for example, and run it with node app.js. Don't forget to install the dependencies (npm install node-fetch).
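If you'd rather use axios, as mentioned above, an equivalent sketch might look like this (the output file name is arbitrary; axios parses JSON responses by default, so the data is stringified back before writing):

const fs = require("fs");
const axios = require("axios");

async function downloadAsJsonWithAxios(url, fileName) {
  const response = await axios.get(url);
  // response.data is already a parsed object; serialize it for the file.
  fs.writeFileSync(`${fileName}.json`, JSON.stringify(response.data), "utf8");
  console.log("JSON file has been saved.");
}

downloadAsJsonWithAxios(
  "https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/748328/JSON/",
  "output"
).catch(function (err) {
  console.log(err);
});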
So my scenario is: a user clicks a button on a web app, which triggers a server-side POST request to an internal (i.e. non-public) API sitting on another server in the same network; this should return a PDF to my server, which will proxy (pipe) it back to the user.
I want to just proxy the PDF body content directly to the client without creating a tmp file.
I have this code which works using the npm request module but it does not feel right:
var pdfRequest = request(requestOptions);

pdfRequest.on('error', function (err) {
  utils.sendErrorResponse(500, 'PROBLEM PIPING PDF DOWNLOAD: ' + err, res);
});

pdfRequest.on('response', function (resp) {
  if (resp.statusCode === 200) {
    pdfRequest.pipe(res);
  } else {
    utils.sendErrorResponse(500, 'PROBLEM PIPING PDF DOWNLOAD: RAW RESP: ' + JSON.stringify(resp), res);
  }
});
Is this the correct way to pipe the PDF response?
Notes:
I need to check the status code to conditionally handle errors; the payload for the POST is contained in requestOptions (I know this part is all correct).
I would like to keep using the request module.
I definitely do not want to be creating any temp files.
If possible, I would also like to modify the Content-Disposition header to set a custom filename; I know how to do this without using pipes (a sketch of this follows these notes).
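For reference, here is a minimal sketch of the filename part combined with the piping approach above (report.pdf is a made-up name; everything else mirrors the code above):

pdfRequest.on('response', function (resp) {
  if (resp.statusCode === 200) {
    // Headers must be set before any body bytes are piped through.
    res.setHeader('Content-Type', 'application/pdf');
    res.setHeader('Content-Disposition', 'attachment; filename="report.pdf"');
    pdfRequest.pipe(res);
  } else {
    utils.sendErrorResponse(500, 'PROBLEM PIPING PDF DOWNLOAD: RAW RESP: ' + JSON.stringify(resp), res);
  }
});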
I am using the request module and I am getting an empty body in my response. Here is the code:
request.get(data_url, function (error, response, body) {
  console.log('----######----');
  console.log(response);
  console.log('----######----');
  console.log(body);
});
Actually, when I manually hit the URL contained in my data_url variable, a CSV file gets downloaded automatically. Is the problem due to this behaviour? If not, please suggest a suitable solution.
Also, if I replace data_url with the actual URL contained in the variable, then I do get the body in the response.
I want to use NodeJS as a tool for website scraping. I have already implemented a script which logs me in to the system and parses some data from the page.
The steps are defined like this:
Open login page
Enter login data
Submit login form
Go to desired page
Grab and parse values from the page
Save data to file
Exit
Obviously, the problem is that my script has to log in every time, and I want to eliminate that. I want to implement some kind of cookie management system, where I can save cookies to a .txt file, and then during the next request I can load the cookies from the file and send them in the request headers.
This kind of cookie management system is not hard to implement, but the problem is how to access cookies in NodeJS. The only way I found is using the request response object, where you can do something like this:
request.get(
  {
    headers: requestHeaders,
    uri: user.getLoginUrl(),
    followRedirect: true,
    jar: jar,
    maxRedirects: 10,
  },
  function (err, res, body) {
    if (err) {
      console.log('GET request failed here is error');
      console.log(res);
    }
    // Get cookies from response
    var responseCookies = res.headers['set-cookie'];
    var requestCookies = '';
    for (var i = 0; i < responseCookies.length; i++) {
      var oneCookie = responseCookies[i];
      oneCookie = oneCookie.split(';');
      requestCookies = requestCookies + oneCookie[0] + ';';
    }
  }
);
Now the content of the variable requestCookies can be saved to a .txt file and loaded the next time the script is executed; this way you can avoid logging the user in every time the script runs.
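For example, a minimal sketch of the save/load round trip with plain fs (cookies.txt and user.getDataUrl() are made-up names for illustration):

var fs = require('fs');

// After login: persist the cookie string collected above.
fs.writeFileSync('cookies.txt', requestCookies, 'utf8');

// On the next run: load it back and send it as the Cookie header,
// skipping the login step entirely.
var savedCookies = fs.readFileSync('cookies.txt', 'utf8');
request.get(
  { uri: user.getDataUrl(), headers: { Cookie: savedCookies } },
  function (err, res, body) {
    // body comes back using the saved session
  }
);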
Is this the right way, or is there a method which returns the cookies?
NOTE: If you want to set up your request object to automatically resend received cookies on every subsequent request, use the following line during object creation:
var request = require("request");
request = request.defaults({ jar: true }); // Send cookies on every subsequent request
In my case, I've used the 'http' library like the following:
http.get(url, function (response) {
  variable = response.headers['set-cookie'];
});
This function gets a specific cookie value from a server response (in TypeScript):
function getResponseCookieValue(res: Response, param: string) {
  const setCookieHeader = res.headers.get('Set-Cookie');
  // Match "param=value" at the start of the header or after a comma separator
  const parts = setCookieHeader?.match(new RegExp(`(^|, )${param}=([^;]+); `));
  const value = parts ? parts[2] : undefined;
  return value;
}
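Usage might look like this (the sessionid cookie name is just an example):
const sessionId = getResponseCookieValue(res, 'sessionid');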
I use Axios personally.
axios.request(options).then(function (response) {
  console.log(response.config.headers.Cookie);
}).catch(function (error) {
  console.error(error);
});