I've been using the twit library to query Twitter. When I try to fetch a user's timeline, I use something like this:
var Twit = require('twit');

var gtl = function getTimeline(userName) {
    var T = new Twit({
        consumer_key: yourKey,
        consumer_secret: yourSecret,
        access_token: yourToken,
        access_token_secret: yourTokenSecret
    });
    var est = [];
    T.get('statuses/user_timeline', { screen_name: userName, count: 100 }, function (err, reply) {
        est = reply;
    });
    console.log(est); // still empty: this runs before the request finishes
};
The 'get' method seems to act asynchronously, so 'est' will be empty until the request finishes. Nevertheless, I can't find an 'end' event that would let me fire an action only AFTER the response has completed.
@franaf,
I'm probably late here.
The callback itself acts as the 'end' event: the twit module is smart enough to handle the response chunks and fires this callback only after the complete response has been received.
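So, as a minimal rework of the snippet from the question (yourKey and friends are still placeholders), just move anything that depends on the reply into the callback:

T.get('statuses/user_timeline', { screen_name: userName, count: 100 }, function (err, reply) {
    if (err) {
        return console.error(err);
    }
    // The complete response has arrived by the time we get here.
    est = reply;
    console.log(est);
});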
I am developing a Node.js application in which I have an endpoint as shown below:
const streamService = require('./stream.service');

module.exports = function (app) {
    /**
     * The API below sends all stream data according to the parameters
     * passed in the query string.
     */
    app.get("/streams", async function (request, response) {
        var device_id = (!request.query.device_id) ? "" : request.query.device_id;
        var userid = (!request.query.user) ? "" : request.query.user;
        var streams = streamService.getAllStreams(device_id, userid);
        response.send(streams);
    });
};
This calls the getAllStreams function, which is defined in another file. While getAllStreams is executing, if another API call arrives, it overrides the data I got from the query parameters and affects the response of getAllStreams. I have also observed that the response is a mixture of the expected results of both API calls. I would like to know whether I am doing something wrong when calling the function. It should effectively get a fresh set of values for every call, but it does not.
I know this is very strange behavior, which I noticed today and never expected, but I am looking for any solution. Please note that I have checked the above scenario multiple times by putting logs on both the server and the client side.
Thank you in advance for your help. :-)
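Since stream.service isn't shown, this is only a guess, but here is a minimal sketch of how module-level state in that file would produce exactly this symptom (all names below are hypothetical):

// stream.service.js -- hypothetical sketch, not the actual code
var deviceId, userId; // module-level: shared by ALL concurrent requests

exports.getAllStreams = function (d, u) {
    deviceId = d; // a second request arriving mid-flight overwrites these,
    userId = u;   // so the first call ends up reading the second call's values
    return fetchStreams(deviceId, userId); // fetchStreams is a placeholder
};

Keeping the values in local variables or function parameters only gives every call its own copy and avoids the mixing.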
I need to know if there is a way to notify the front-end UI JS file from the "on data" event of the Node API method.
This is the code I currently have on the front end JS:
var filter = {
    user: username,
    time: time
};

$.get('/getZipFolder/' + JSON.stringify(filter), function (data) {
    console.log(data.filename);
});
On the Node.js side, this is the code:
exports.getZipFolder = function (req, res, next) {
    var request = JSON.parse(req.params.obj);
    var call = myChatClient.getZipFolderName(request);
    call.on('data', function (bitem) {
        var zipFileName = 'myFolder.zip';
        res.json({ "filename": zipFileName });
    });
};
The response:
res.json({"filename":zipFileName});
never reaches the front-end JS, and this statement never gets called:
console.log(data.filename);
Is there a way to notify the UI in such a manner from the 'on data' event of the Node API method?
Note: the Node.js process does not end when res.json({"filename":zipFileName}); is sent to the UI on the 'on data' event; it goes on in the backend doing other processing, and the UI cannot wait for it.
I think the problem may happen during the handling of myChatClient.getZipFolderName. Maybe it never triggers the callback, which means the server never responds to the UI. Whether or not you add return before res makes no difference under this condition.
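One way to verify that hypothesis, assuming call is a regular Node EventEmitter-style stream (for example a gRPC call), is to listen for its other events too:

var call = myChatClient.getZipFolderName(request);
call.on('data', function (bitem) {
    res.json({ "filename": 'myFolder.zip' });
});
call.on('error', function (err) {
    // if this fires, 'data' never will, and the UI gets no response
    console.error('getZipFolderName stream error:', err);
});
call.on('end', function () {
    console.log('stream ended'); // fires even when no 'data' was emitted
});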
First of all, I am aware of this answer to a similar kind of problem.
Problem
I have a third-party protocol that uses TCP/IP. This protocol defines that the server replies to every message received. On the client side (which I am trying to implement), I have to wait for the answer from the server.
The problem occurs when I try to send messages: I need to wait for the answer to the first message before I send the second one (like ping-pong).
I tried to do multiple writes on my Node.js TCP client like this, which understandably fails because the writes are asynchronous:
client.connect(connectOptions, function () {
    client.write(message1);
    client.write(message2);
});
Like I said before, I have a third-party component that responds to both messages with a numeric value. So when
client.on('data', function (data) {});
fires an event, I can't distinguish which message was responsible for the answer. Unlike the linked answer, I don't have the ability to tag the answer on the server side.
I am new to node.js, so I am trying to figure out the best way to solve this kind of problem, as it's of the nature: do synchronous things in an async environment.
One way would be to use a common list of handlers to keep track of requests and responses:
var handlers = [];

client.connect(connectOptions, function () {
    client.write(message1);
    handlers.push(function msg1_handler(data) {});
    client.write(message2);
    handlers.push(function msg2_handler(data) {});
});

client.on('data', function (data) {
    var handler = handlers.shift();
    handler(data);
});
All of this should obviously be wrapped in a separate class containing both the handlers and the client object; it's just an example of how to do it. The drawback is that if the server fails to respond to some request, you end up with a complete mess that is hard to put right.
Another idea is to buffer requests:
function BufferedClient(cli) {
    this.cli = cli;
    this.buffer = [];
    this.waiting_for_response = false;
    var that = this;
    cli.on('data', function (data) {
        that.waiting_for_response = false;
        var pair = that.buffer.shift();
        var handler = pair[0];
        process.nextTick(function () {
            // we use .nextTick to avoid a potential
            // exception in the handler which would break
            // BufferedClient
            handler(data);
        });
        that.flush();
    });
}

BufferedClient.prototype = {
    request: function (msg, handler) {
        this.buffer.push([handler, msg]);
        this.flush();
    },
    flush: function () {
        var pair = this.buffer[0];
        if (pair && !this.waiting_for_response) {
            this.cli.write(pair[1]);
            this.waiting_for_response = true;
        }
    }
};
This time you send requests sequentially (so, effectively synchronously) thanks to how .request() and the .on('data') handler work together with the .flush() function. Usage:
client.connect(connectOptions, function () {
    var buff_cli = new BufferedClient(client);

    buff_cli.request(message1, function (data) { });
    buff_cli.request(message2, function (data) { });
});
Now even if the server fails to respond, you don't have a mess. However, if you issue buff_cli.request calls in parallel and one of them fails, you will have a memory leak (this.buffer keeps growing while nothing drains it, because the BufferedClient is waiting for a response that never comes). This can be fixed by adding some timeouts on the socket.
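For example, a rough sketch of such a timeout inside the BufferedClient constructor, assuming cli is a net.Socket (untested):

// in the BufferedClient constructor, next to the 'data' handler:
cli.setTimeout(5000);
cli.on('timeout', function () {
    // No response arrived in time: fail all pending handlers and reset,
    // so this.buffer cannot grow without bound.
    var pending = that.buffer.splice(0);
    that.waiting_for_response = false;
    pending.forEach(function (pair) {
        pair[0](new Error('response timeout'));
    });
});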
Note that both solutions assume that the server never pushes anything to the client without a request.
If I were you, I would go with the second solution. Note that I haven't tested the code, so it might be buggy, but the general idea should be OK.
Side note: when you implement a server (and I know that you don't in this case), you should always have a protocol that matches each request with a response in a unique way. One way is to send a unique ID with each request so that the server responds with the same ID. In such a scenario, matching a request with its response is very easy, and you avoid all that mess.
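For illustration, a sketch of such ID matching on the client side (simplified: it assumes each 'data' event carries exactly one complete JSON message, which real TCP framing does not guarantee):

var pending = {};
var nextId = 0;

function send(msg, callback) {
    var id = nextId++;
    pending[id] = callback;
    client.write(JSON.stringify({ id: id, body: msg }) + '\n');
}

client.on('data', function (chunk) {
    var reply = JSON.parse(chunk); // simplification: one message per chunk
    var callback = pending[reply.id];
    delete pending[reply.id];
    if (callback) callback(reply.body);
});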
I'm testing a web page where the user can send a message to another user via a text input. A POST request is then sent to the server, and the message is dumped on disk in the var/mail/new folder.
After automating the sending of the message in the page with Protractor, I call browser.waitForAngular() and browser.driver.sleep(4000) to leave time for the backend to write the mail to disk.
After these calls, the check for the email's presence fails. Looking in a Unix shell, I can confirm that the email was sent, and the next Jasmine it block also confirms the presence of the email.
Why is browser.driver.sleep(4000) not effective for waiting for the backend to proceed? How can I correct the following code?
it("is possible to send a message", function() {
shared.loginContributor();
var mailsBeforeMessaging =
fs.readdirSync(browser.params.mail.queue_path + "/new");
console.log('mailsBeforeMessaging');
console.log(mailsBeforeMessaging.length);
console.log(fs.lstatSync(browser.params.mail.queue_path + "/new"));
var usersListing = new UserPages.UsersListing().get();
var annotatorPage = usersListing.getUserPage("annotator");
annotatorPage.sendMessage("title5", "content64");
exec("/tmp/check.sh");
// we expect the message widget to disappear
var button = element(by.css(".user-profile-info-button"));
console.log('waiting');
browser.wait(EC.elementToBeClickable(button), 5000);
console.log('waiting is finished');
expect(EC.elementToBeClickable(button)).toBeTruthy();
// wait for mail to be dumped on the disk?
browser.waitForAngular();
browser.driver.sleep(4000);
exec("/tmp/check.sh");
var mailsAfterMessaging =
fs.readdirSync(browser.params.mail.queue_path + "/new");
console.log('mailsAfterMessaging');
// ERROR: here the number of emails is NOT incremented
console.log(mailsAfterMessaging.length);
console.log(fs.lstatSync(browser.params.mail.queue_path + "/new"));
});
it("xyz", function() {
console.log(fs.lstatSync(browser.params.mail.queue_path + "/new"));
// here the number of emails is incremented
var mailsAfterMessaging =
fs.readdirSync(browser.params.mail.queue_path + "/new");
console.log('mailsAfterMessaging');
console.log(mailsAfterMessaging.length);
});
Most Protractor functions do not do anything immediately. They queue something up to be done later and return a promise for it. Only after an it block has scheduled a bunch of things to do does any of it actually start happening (via the promises registered in the ControlFlow).
Your checks, however, all execute immediately. So they happen before any of the Protractor calls have accomplished anything.
Use then to make the waiting and dependencies explicit in your test. Like this:
annotatorPage.sendMessage("title5", "content64").then(function() {
    exec("/tmp/check.sh");
});
or:
browser.wait(EC.elementToBeClickable(button), 5000).then(function() {
    console.log('wait-for-clickable has completed'); // B
});
console.log('wait-for-clickable has been scheduled'); // A
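Applied to the failing mail check from the question, that would look roughly like this (same fs and browser.params as in the original test):

browser.driver.sleep(4000).then(function () {
    // now this really runs after the sleep, not at scheduling time
    var mailsAfterMessaging =
        fs.readdirSync(browser.params.mail.queue_path + "/new");
    console.log(mailsAfterMessaging.length);
});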
See the Protractor Control Flow documentation and the Webdriver JS API doc.
It's not you. This is a crazy API to learn, because it does not behave at all the way anyone familiar with normal synchronous programming would expect.
I'm making a temporary fake API and am trying to set up a simple request/response script in Node using Express.js to achieve this. It's very straightforward: a request comes in and is validated; if valid, it is merged with a .json template file and the result is returned, thus giving the impression the user was successfully created.
app.post('/agent/user', function (req, res) {
    var responseTemplate = new jsonRequired('post_user');
    var errorTemplate = new jsonRequired('post_user_error');
    var payload = req.body;
    var responseData;
    var hasErrors = false;

    console.log('Creating new user');

    // Recursive merge from http://stackoverflow.com/a/383245/284695
    responseData = new mergeRecursive(responseTemplate, payload);

    if (!payload.username) {
        hasErrors = true;
        errorTemplate.errors.username.push('A username is required.');
    }

    if (hasErrors) {
        res.send(errorTemplate, 422);
    } else {
        res.send(responseData, 200);
    }
});
The problem I'm having is that data persists between calls. So if I define a username and name[first] in one request and just a username in the second one, both requests come back with the name[first] property.
I have a feeling it's something to do with JS closures. Unfortunately, every tutorial I find seems to be about making closures, not avoiding them.
It should work like this:
1. The client POSTs username=user1&name[first]=joe&name[last]=bloggs.
2. The server loads a .json file containing a prepopulated user object, e.g.
{"username":"demo","name":{"first":"John","last":"Doe"}...}
3. mergeRecursive() merges the payload from the POST request over the template object and returns the new object as the POST response text.
The problem is that with every new request, the server is using the result of step 3 in step 2 instead of reloading the .json file.
That mergeRecursive function has the same caveat as jQuery.extend: it modifies the first object passed into it. In fact, you don't even need to use its return value.
You didn't show the code of the jsonRequired function (it's not even clear why you've used new when invoking it), but it looks like it doesn't create a new object each time it's called and instead fetches the object from some outer repository. Obviously, the modifications mergeRecursive makes to that object won't be lost after the function ends.
The solution is to use your own data object for merging, like this:
var responseData = {};
...
mergeRecursive(responseData, responseTemplate);
mergeRecursive(responseData, payload);
Merging two objects will do exactly this to you.
If your responseTemplate has a parameter that the actual request did not have, you will end up having it in the result.
Check the definition of the word merge ;)
While this doesn't resolve the issue I had, I have found a workaround using the cloneextend package, available via npm:
$ npm install cloneextend
This allows me to use the following JS:
var ce = require('cloneextend');
...
ce.extend(responseData, responseTemplate);
ce.extend(responseData, payload);
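In case it helps anyone, an alternative sketch without an extra dependency is to deep-copy the cached template before merging, so the original object is never mutated (the JSON round-trip works here because the template is plain JSON data):

// deep-copy the template so mergeRecursive mutates only the copy
var responseTemplate = JSON.parse(JSON.stringify(new jsonRequired('post_user')));
var responseData = new mergeRecursive(responseTemplate, payload);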