Hi, is there any way to close a POST request in Node.js? This is how I do it, and it happens every 10 seconds. I don't need the response; I just need to make a call to the URL. With cURL there is a way to close the handle, as in $curl->close().
function callUrl(url, data) {
    var request = require('request');
    request.post(
        url,
        { form: { data: data } },
        function (error, response, body) {
            if (!error && response.statusCode == 200) {
                console.log(body);
            }
        }
    );
}
To deploy the Node.js app I used forever: https://www.npmjs.com/package/forever
The issue I am having is that, since I am a newbie, I don't know why this happens, but the server always keeps a load average of 90% or above. This behavior starts after running the app with forever:
forever start app.js
Has anyone had similar issues? Or how did you deploy if you didn't use forever?
The server load average is checked by typing w.
This is how I call the function:
var cTask = setInterval(function () {
    var utcDate = moment.utc();
    console.log('cron timer start: ' + moment.utc().format("YYYY-MM-DD HH:mm:ss"));
    jsonObject.startAt = tmpStartAt;
    jsonObject.endAt = tmpEndAt;
    data = JSON.stringify(jsonObject);
    callUrl(callbackUrl1, data);
    // Note: use === for comparison; a single = here assigns and is always truthy.
    if (endAt === "my time to end") {
        clearInterval(cTask);
        callUrl(callbackUrl2, data);
    }
}, callback * 1000);
Well, I don't really know what value the variable callback holds. But yes, an alternative to forever is the PM2 module. It is a handy tool for deployment and can also monitor CPU usage with pm2 monit.
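For example, migrating from forever to PM2 is just a couple of commands (assuming app.js is your entry file, as above):
npm install -g pm2
pm2 start app.js
pm2 monit
pm2 monit opens a live per-process CPU and memory view, which should help you narrow down where that 90% load average is coming from.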
I was going to use the PaperCut API, though as far as I can tell XML-RPC isn't supported for Node.js, or at least I couldn't find an appropriate client for the purpose. Here's the link to the PaperCut API:
https://www.papercut.com/support/resources/manuals/ng-mf/common/topics/tools-web-services.html
I was wondering whether anyone has been able to get it working in JavaScript. I'm using Node.js on QNAP (in Container Station). If it can only be run in Python, should I install a Python container? Could I use a snippet of Python code and call it from Node.js?
I work for PaperCut Software.
Sorry it took me so long to reply to this, but I eventually found a free afternoon to knock up some code.
var xmlrpc = require('xmlrpc')

const authToken = 'token'
const hostAddress = "172.24.96.1"

// Waits briefly to give the XML-RPC server time to start up and start
// listening
setTimeout(function () {
    // Creates an XML-RPC client. Passes the host information on where to
    // make the XML-RPC calls.
    var client = xmlrpc.createClient({ host: hostAddress, port: 9191, path: '/rpc/api/xmlrpc' })

    // Sends a method call to the PaperCut MF/NG server
    client.methodCall(`api.${process.argv[2]}`, [authToken].concat(process.argv.slice(3)), function (error, value) {
        // Results of the method response
        if (error === undefined || error === null) {
            console.log(`Method response for '${process.argv[2]}': ${value}`)
        } else {
            console.log(`Error response for '${process.argv[2]}': ${error}`)
        }
    })
}, 1000)
To run this from the command line, try something like:
node main.js getUserProperty alec balance
I am running a cron job every 5 minutes to get data from a 3rd-party API. There can be N requests at a time from the Node.js application. Below are the details and code samples:
1> Running the cron job every 5 minutes:
const cron = require('node-cron');
const request = require('request');
const otherServices = require('./services/otherServices');

cron.schedule("0 */5 * * * *", function () {
    initiateScheduler();
});
2> Get the list of elements for which I want to initiate the request. It can receive N elements. I call the request function (getSingleElementUpdate()) in a forEach loop:
var initiateScheduler = function () {
    // Database call to get the elements list
    otherServices.moduleName()
        .then((arrayList) => {
            arrayList.forEach(function (singleElement, index) {
                getSingleElementUpdate(singleElement, 1);
            }, this);
        })
        .catch((err) => {
            console.log(err);
        });
};
3> Start initiating the request for singleElement. Please note I don't need any callback if I receive a successful (200) response from the request. I just have to update my database entries on success.
var getSingleElementUpdate = function (singleElement, count) {
    var bodyReq = {
        "id": singleElement.elem_id
    };
    var options = {
        method: 'POST',
        url: 'http://example.url.com',
        body: bodyReq,
        json: true // dataType/crossDomain are jQuery-ajax options and are ignored by request
    };
    request(options, function (error, response, body) {
        if (error) {
            if (count < 3) {
                count = count + 1;
                // Retry this single element (not the whole scheduler)
                getSingleElementUpdate(singleElement, count);
            }
        } else {
            // Request success
            // No callback required;
            // just update database entries on a successful response
        }
    });
};
I have already checked this:
request-promise: but I don't need any callback after a successful request, so I didn't find any advantage to implementing it in my code. Let me know if you see any positive point in adding it.
I need your help with the following things:
I have checked the performance when I receive 10 elements in the arrayList of step 2. The problem is that I don't have a clear picture of what will happen when I start receiving 100 or 1000 elements in step 2. So I need your help in determining whether I need to update my code for that scenario, or whether there is anything I missed that would degrade performance. Also, how many requests can I make at a time, at maximum? Any help from you is appreciated.
Thanks!
AFAIK there is no hard limit on the number of requests. However, there are (at least) two things to consider: your hardware limits (memory/CPU) and the remote server's latency (is it able to respond to all requests within 5 minutes, before the next batch starts). Without knowing the context it's also impossible to predict what scaling mechanism you might need.
The question is actually more about app architecture than about some specific piece of code, so you might want to try softwareengineering instead of SO.
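That said, if the batch does grow to hundreds or thousands of elements, one common precaution is to cap how many requests are in flight at once rather than firing the whole forEach at the same time. A minimal sketch, reusing the arrayList and endpoint from the question; the runWithLimit helper and the limit of 10 are illustrative:
var request = require('request');

// Run at most `limit` tasks at a time; the rest wait in the queue.
function runWithLimit(tasks, limit) {
    var active = 0;
    function next() {
        while (active < limit && tasks.length > 0) {
            var task = tasks.shift();
            active++;
            task(function done() {
                active--;
                next();
            });
        }
    }
    next();
}

// One task per element; each task signals completion via done().
var tasks = arrayList.map(function (singleElement) {
    return function (done) {
        request.post({
            url: 'http://example.url.com',
            json: true,
            body: { id: singleElement.elem_id }
        }, function (error, response, body) {
            // update database entries on success, retry on error, etc.
            done();
        });
    };
});

runWithLimit(tasks, 10);
This keeps memory and socket usage roughly constant no matter how many elements the database call returns.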
I'm new to Node and am having some difficulties with getting the Request library to return an accurate response time.
I have read the thread "nodejs request library, get the response time" and can see that the request library should be able to return an "elapsed time" for the request.
I am using it in the following way:
request.get({
    url: 'http://example.com',
    time: true
}, function (err, response) {
    console.log('Request time in ms', response.elapsedTime);
});
The response.elapsedTime result is in the region of 500-800 ms; however, I can see the request is actually taking closer to 5000 ms.
I am testing this against an uncached nginx page which takes roughly 5 seconds to render when profiled in a browser (Chrome).
Here is an example of the timing within Chrome (although the server is under load, hence the ~10 s):
(screenshot: Chrome network profile)
It looks to me like this isn't actually timing the full start-to-finish of the request, but is "timing" something else - possibly the time taken to download the page once the server starts streaming it.
If this is the case, how can I get the actual start-to-finish time this request has taken? The time I need is from making the request to receiving the entire body and headers.
I am running the request like this, with listofURLs being an array of URLs to request:
for (var i = 0; i < listofURLs.length; i++) {
    collectSingleURL(listofURLs[i].url.toString(), function (rData) {
        console.log(rData['url'] + " - " + rData['responseTime']);
    });
}
function collectSingleURL(urlToCall, cb) {
    var https = require('https');
    var http = require('http');
    https.globalAgent.maxSockets = 5;
    http.globalAgent.maxSockets = 5;
    var request = require('request');
    var start = Date.now();

    // Make the request
    request.get({
        "url": urlToCall,
        "time": true,
        headers: { "Connection": "keep-alive" }
    }, function (error, response, body) {
        // Check for error; return early so the success path below
        // doesn't run (response is undefined on a failed request).
        if (error) {
            console.log('Error in collectSingleURL:', error);
            return cb({
                "errorDetected": "Yes",
                "errorMsg": error,
                "url": urlToCall
            });
        }
        // All good - pass the relevant data back to the callback
        var result = {
            "url": urlToCall,
            "timeDate": response.headers['date'],
            "responseCode": response.statusCode,
            "responseMessage": response.statusMessage,
            "cacheStatus": response.headers['x-magento-cache-debug'],
            "fullHeaders": response.headers,
            "bodyHTML": body,
            "responseTime": Date.now() - start
        };
        cb(result);
    });
}
You are missing a key point - it takes 5 seconds to render, not just to download the page.
The request module for Node is not a full browser; it makes a simple HTTP request. So when you request, for example, www.stackoverflow.com, it will only load the basic HTML returned by the page; it will not load the JS files, CSS files, images, etc.
The browser, on the other hand, will load all of that after the basic HTML of the page is loaded (some parts will load before the page has finished loading, together with the page).
Take a look at the network profile of Stack Overflow below - the render finishes at ~1.6 seconds, but the basic HTML page (the upper bar) finished loading at around 0.5 seconds. So if you use request to fetch a web page, it is actually only loading the HTML, meaning "the upper bar".
Just time it yourself:
var start = Date.now();
request.get({
    url: 'http://example.com'
}, function (err, response) {
    console.log('Request time in ms', Date.now() - start);
});
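If you are on a recent version of request (2.81+), the time: true option also records a per-phase breakdown alongside elapsedTime, which makes it easy to compare against the manual measurement above. A sketch, with example.com standing in for the real URL:
var request = require('request');

var start = Date.now();
request.get({ url: 'http://example.com', time: true }, function (err, response) {
    if (err) return console.log(err);
    // Wall-clock time from calling request.get() to receiving the full body.
    console.log('manual total:', Date.now() - start, 'ms');
    // Library-reported elapsed time, for comparison.
    console.log('elapsedTime:', response.elapsedTime, 'ms');
    // timingPhases (request >= 2.81) splits the request into
    // wait, dns, tcp, firstByte, download and total.
    if (response.timingPhases) {
        console.log('phases:', response.timingPhases);
    }
});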
Here is my code. In the wpt.runTest function I am hitting a URL and getting another URL back in the response; I then use that URL in my request function. But I need to wait for some time first, as it takes a while before the data is available at the URL that is to be used with the request module.
wpt.runTest('http://some url ', function (err, data) {
    //console.log("hello -->", err || data);
    data_url = data.data.summaryCSV;
    console.log(data_url);
    console.log('-----------');
    console.log(data_url);
    console.log('-----------');
    request({ uri: data_url, method: 'GET' }, function (error, response, body) {
        console.log('----######----');
        console.log(response.headers);
        console.log('----######----');
        if (error) {
            console.log('got an error' + error);
        }
        var data = body;
        console.log('here is the body' + body);
    });
});
What I want is a time gap or pause before my request function gets executed. How can I achieve this?
The above problem can be solved by:
Using a cron job
Using the setTimeout function
Using setTimeout
wpt.runTest('http://some url ', function (err, data) {
    //console.log("hello -->", err || data);
    data_url = data.data.summaryCSV;
    console.log(data_url);
    console.log('-----------');
    console.log(data_url);
    console.log('-----------');
    var delay = 5000;
    setTimeout(function () {
        request({ uri: data_url, method: 'GET' }, function (error, response, body) {
            // handle error and response
        });
    }, delay);
});
Using cron
This will be really helpful if you have extended cases of this particular problem - for example, you need to keep track of the resource at various times, say every hour or every day. If you expect such cases, then cron is worth considering.
Steps of execution with cron:
Register a cron job using a cron pattern.
Start the cron job.
When the pattern evaluates to true, your supplied function will be executed. Put your implementation (making the request for the resource using the request library) in the supplied function - see the sketch below.
Helper libraries for cron in Node.js are Lib1 and Lib2.
Examples: lib1, lib2.
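A minimal sketch of those steps using the node-cron package (the hourly pattern is illustrative, and data_url stands in for the resource URL from the question):
var cron = require('node-cron');
var request = require('request');

// Steps 1 and 2: register a job with a cron pattern and start it.
// '0 * * * *' fires at minute 0 of every hour.
cron.schedule('0 * * * *', function () {
    // Step 3: when the pattern matches, request the resource.
    request({ uri: data_url, method: 'GET' }, function (error, response, body) {
        if (error) {
            return console.log('got an error: ' + error);
        }
        // handle the response and body here
    });
});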
I have a node.js process that uses a large number of client requests to pull information from a website. I am using the request package (https://www.npmjs.com/package/request) since, as it says: "It supports HTTPS and follows redirects by default."
My problem is that after a certain period of time, the requests begin to hang. I haven't been able to determine if this is because the server is returning an infinite data stream, or if something else is going on. I've set the timeout, but after some number of successful requests, some of them eventually get stuck and never complete.
var options = { url: 'some url', timeout: 60000 };
request(options, function (err, response, body) {
    // process
});
My questions are: can I shut down a connection after a certain amount of data is received using this library, and can I stop the request from hanging? Do I need to use the http/https libraries and handle the redirects and protocol switching myself in order to get the kind of control I need? If so, is there a standardized practice for that?
Edit: Also, if I stop the process and restart it, the requests pick right back up and start working, so I don't think it is related to the server or the machine the code is running on.
Note that with request(options, callback), the callback is fired only once the request has completed, and there is no way to break off the request partway through from inside it.
You should listen on the data event instead:
var request = require('request');

var stream = request(options);
var len = 0;

stream.on('data', function (data) {
    // process your chunk of data here
    len += Buffer.byteLength(data);
    // break off the stream once more than 1000 bytes have arrived
    if (len > 1000) {
        stream.abort();
    }
});
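One follow-up worth noting with the streaming form: attach an 'error' handler as well, since an unhandled 'error' event on the stream will crash the process, and timeouts, for example, surface there:
stream.on('error', function (err) {
    // Without this handler a network failure would throw and bring
    // down the whole process; log or retry here instead.
    console.log('request error:', err.message);
});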