I am writing a Node.js server and I am experiencing a weird problem. Here's the code:
socket.write(">> first message \0","utf8",function(){ } );
socket.write(">> second message \0","utf8",function(){ });
When I listen on the client side (an Adobe Flash socket), it only receives one message, twice. And if I reverse the order of the two calls, the message that comes later is the one received twice. Any clue on how to solve this?
I am sure this bug isn't in my programming because I have checked it like a hundred times. I also tried to maintain a stack explicitly, then found out that Node.js is supposed to maintain one internally.
My Node version is 0.5.1, running on Windows 7 (the Windows binary distributed on the website).
Try sending the second piece of data in the callback of the first:
socket.write(">> first message \0", "utf8", function () {
    socket.write(">> second message \0", "utf8", function () { });
});
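More generally, any number of writes can be serialized the same way by chaining each one in the previous write's callback. A minimal sketch (the helper name is made up, and the trailing \0 delimiter is carried over from the question):

```javascript
// Issues each write only after the previous write's callback has fired,
// appending the "\0" terminator the Flash client splits messages on.
function writeInOrder(socket, messages, done) {
  function next(i) {
    if (i >= messages.length) {
      if (done) done();
      return;
    }
    socket.write(messages[i] + "\0", "utf8", function () {
      next(i + 1);
    });
  }
  next(0);
}

// Usage:
// writeInOrder(socket, [">> first message ", ">> second message "]);
```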
Using Handlebars, Express, and Node.js, I've got a shell script that's triggered after an HTML form is filled out. The script -- which runs on the server -- outputs nicely via console.log:
builder.stdout.on('data', function (data) {
    console.log('stdout: ' + data);
});
The console shows exactly what it should. However, I need to feed that output in (near) real-time to the client in a window so [s]he can watch the progress of the script. I have the following in my template:
<div>
    <fieldset style="width:1200px" id="outputWindow">
        ... want the output text here ...
    </fieldset>
</div>
I've searched for a "clean" way to implement redirection to a client browser but thus far have been unsuccessful in finding one; most of what I've found revolves around polling updates, and I'd prefer to avoid that. Not asking for a complete solution, but can someone point me in the general direction of an accepted way to redirect this asynchronous output so that it will display on the client's browser in that area?
Thanks!
To show real-time console.log output in the browser you have a few options.
The first is to use a WebSocket.
You can see examples of how to implement the server side and the client side here:
https://github.com/websockets/ws/tree/master/examples
But if you only need to push console.log output one way, I'd advise using EventSource.
Again, there are server-side and client-side examples here:
https://github.com/EventSource/eventsource/tree/master/example
I have a program using WebSockets over TCP: the client is a Chrome extension and the server is an application written in C++.
When I send small amounts of data from the client to the server, it works fine. But when I send large amounts of data (e.g. an HTML page's source), it arrives slightly delayed.
For Example:
Client sends: 1,2,3
Server receives: 1,2
Client sends: 4
Server receives: 3
Client sends: 5
Server receives: 4
It seems like there is a delay.
This is my client code:
var m_cWebsocket = new WebSocket("Servername");
if (m_cWebsocket == null) { return false; }
m_cWebsocket.onopen = onWebsocketOpen(m_cWebsocket);
m_cWebsocket.onmessage = onWebsocketMessage;
m_cWebsocket.onerror = onWebsocketError;
m_cWebsocket.onclose = onWebsocketError;
I use m_cWebsocket.send(strMsg) to send data.
Server code:
while (true) {
    recv(sSocket, szBufferTmp, 99990, 0);
    // recv(sSocket, szBufferTmp, 99990, MSG_PEEK);
    // some processing
}
Since you haven't posted any code to show your implementation of the TCP server or client I can only speculate and try to explain what might be going on here.
That means the potential problems and solutions I outline below may or may not apply to you, but regardless this information should still be helpful to others who might find this question in the future.
TL;DR: most likely, either the server is too slow, the server is not properly waiting for complete 'TCP packets' to be buffered, or the server doesn't know when a message starts and stops and is de-synching while it waits for what it thinks is a 'full packet' as defined by something like a buffer size.
It sounds to me like you are either pushing data from the client faster than the server can read it or, more likely, the server is buffering a set number of bytes from the current TCP stream and waiting for the buffer to fill before outputting additional data.
If you are sending this over localhost, though, it's unlikely you are close to the limits of the stream, and I would expect a server written in C++ to be able to keep up with the JavaScript client.
So this leads me to believe that the issue is in fact the stream buffer on the C++ side.
Now, since the server has no way to know what data you are sending or how much of it, it is common for a TCP reader to use a stream buffer that continuously reads data from the socket until either the buffer has filled to a known size, or it sees a predefined 'stop character'. This is usually something like a line-end \n character, or sometimes \r\n (carriage return, line feed), depending on your operating system.
Since you haven't specified how you are receiving your data, I'm going to assume you created either a char or byte buffer of a certain size. I'm pretty rusty on my C++ socket knowledge, so I might be wrong, but I do believe there is a default read timeout on C++ TCP streams as well.
This means you are possibly running into one of two issues.
Situation 1: you are waiting until that byte/char buffer is filled before outputting its data. The problem is that this acts like a bus that only leaves the station when all the seats are filled: if you don't fill all the seats, your server just sits and waits until it gets enough data to fill up completely, and only then outputs your data.
Situation 2: you are running up against the socket read timeout, so the function returns before it has all the data. This is like a bus running on a clock: every 10 minutes the bus leaves the station, full or empty, and the next bus picks up anyone who shows up late. In your case, the TCP stream isn't able to load 1, 2 and 3 onto a bus fast enough, so the bus leaves with just 1 and 2 on it, because after, say, 20 ms without receiving data the server exits the function and outputs what it has. On the next loop, however, 3 is waiting at the top of the stream buffer ready to get on the next bus out. The stream will load 3, wait until those 20 ms are up, and then exit before repeating this loop.
I think it's more likely the first situation is occurring, though, as I would expect the server to either start catching up or fall further behind as the two ends either begin to sync up or the internal TCP stream buffer fills while the server falls further and further behind.
Main point: you need some way to synchronize the client and the server. I would recommend sending a 'start byte' and an 'end byte' to signal when a message has begun and finished, so you don't exit the read function too early.
Or send a start byte followed by the packet size in bytes, then fill up the buffer until it holds the correct number of bytes. You could additionally include an end byte for some basic error checking.
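Sketched in JavaScript for brevity (the same logic ports directly to the C++ recv loop), the receiving side of that start-byte-plus-length scheme could look like the following. The 0x02 start byte and the 4-byte big-endian length field are arbitrary choices for illustration, not part of any standard:

```javascript
var START = 0x02; // arbitrary start-of-message marker

// Returns a feed() function; call it with each chunk recv() hands you.
// Complete messages are passed to onMessage; partial ones stay buffered.
function makeFramer(onMessage) {
  var buf = Buffer.alloc(0);
  return function feed(chunk) {
    buf = Buffer.concat([buf, chunk]);
    while (buf.length >= 5) {            // 1 start byte + 4 length bytes
      if (buf[0] !== START) {            // resync on garbage
        buf = buf.subarray(1);
        continue;
      }
      var len = buf.readUInt32BE(1);
      if (buf.length < 5 + len) break;   // wait for the rest of the payload
      onMessage(buf.subarray(5, 5 + len));
      buf = buf.subarray(5 + len);
    }
  };
}
```

Because the framer buffers partial input, it doesn't matter how the network splits the frames across reads; each message comes out whole.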
This is a pretty involved topic and hard to really give you a good answer without any code from you, but this should also help anyone in the future who might be having a similar issue.
EDIT: I went back and re-read your question and noticed you said it happens only with large amounts of data, so I think my original assumption was wrong and it's more likely situation 2: the client is sending data to your server faster than the server can read it, which might be bottlenecking the connection, so the client can only send additional data once the server has emptied part of its TCP stream buffer.
Think of it like a tube of water. The socket (tube) can only accept (fill up with) so much data (water) before it's full. Once you let some water out the bottom, though, you can fill it up a little bit more. The only reason it works for small files is that the file is too small to fill the entire tube.
Additional thoughts: You can see how I was approaching this problem in C# in this question: Continuously reading from serial port asynchronously properly
And another similar question I had previously (again in C#): How to use Task.WhenAny with ReadLineAsync to get data from any TcpClient
It's been a while since I've played with TCP streams, though, so my apologies in that I don't remember all the niche details and caveats of the protocol, but hopefully this information is enough to get you in the ballpark for solving your problem.
Full disclaimer, it's been over 2 years since I last touched C++ TCP sockets, and have since worked with sockets/websockets in other languages (such as C# and JavaScript), so I may have some facts wrong about the behavior of C++ TCP sockets specifically, but the core information should still apply. If I got anything wrong, someone in the comments will most likely have the correct information.
Lastly, welcome to Stack Overflow!
I have also asked this question on the Sails project, but it might not be related to Sails at all -- maybe it's related to Node.js, Express.js or my own code -- so I wonder if anyone has ever experienced this:
I have noticed that browser requests are "replayed" on the server. To test it, I did this:
Created a simple controller action that prints "this is a request" on the server -- but does nothing else: it doesn't even send a response to the browser.
When I hit that route, the server console prints "this is a request", as expected.
The browser keeps waiting, spinning. After 2.5 minutes, "this is a request" is printed again on the server. Is this expected behavior?
Worse:
Reload the server.
Hit the route.
"this is a request" is printed on the console.
Now Ctrl+C the server to shut it down and sails lift it again right away, while the browser is still spinning.
The browser will stop, and right after the server is lifted... "this is a request" is printed on the console.
More info: I am using Sails with socket.io, and my code has a "reconnect" function:
io.socket.on('disconnect', function () {
    socketCleanup();
    io.socket.on('connect', function () {
        socketConnect();
    });
});
This is used to re-subscribe the user's socket to special rooms (io.socket.post(socketRooms...), but I don't think it would be responsible for these "replays".
If anything of the above is not the expected behavior, could it be possible that, after an upload is cancelled, there is something trying to replay the upload, causing the server to crash?
I am running Sails on Windows 7.
Something I never knew:
node.js page refresh calling resources twice?
Could this all be caused by the favicon?
It seems there is a known Chrome bug where favicons cause duplicate requests: https://bugs.chromium.org/p/chromium/issues/detail?id=64810
However, that doesn't seem related, since the second request here happens 2.5 minutes later, and I have reproduced the issue with Firefox and IE 9 as well.
I'm currently working on a Vert.x-based application that is making use of vertx-eventbus.js as described here: http://vertx.io/docs/vertx-web/java/#_sockjs_event_bus_bridge
So the client side uses vertx-eventbus.js to send a message to the server side, where a SockJSHandler does the "bridging" to receive the message. The whole mechanism has been working fine for a few months. Only very recently have we needed a higher timeout limit for certain messages. As things stand, those messages always cause the following warning on the server side (and also cause the event bus to return too early on the client side with "undefined"):
Message reply handler timed out as no reply was received - it will be removed
My question, then, is how do you increase the timeout for sending messages in this scenario? It seems to use the default 30-second timeout, and I have looked everywhere (Google, here on Stack Overflow, etc.) with no luck. I know there is a way to set a timeout when sending with Vert.x on the backend, but not when sending from vertx-eventbus.js.
Any input is very much appreciated. Let me know if there is anything else you need to better understand what I'm trying to achieve.
Thank you,
Tom
How would I go about creating an auto-updating newsfeed? I was going to use Node.js, but someone told me that wouldn't work once I got into the thousands of users. Right now, you can post text to the newsfeed and it will be saved into a MySQL database; then, whenever you load the page, it displays all the posts from that database. The problem with this is that you have to reload the page every time there is an update. I was going to use this to tell the Node.js server someone posted an update...
index.html
function sendPost(name,cont) {
socket.emit("newPost", name, cont);
}
app.js
socket.on("newPost", function (name, cont) {
    /* Add the post to the database,
     * then fire an event saying a new post was created
     * and emit a signal with the new data */
});
But that won't work for a ton of people. Does anyone have any suggestions for where I should start, the api's and/or programs I would need to use?
You're on the right track. Build a route on your Node webserver that will cause it to fetch a newspost and broadcast to all connected clients. Then, just fire the request to Node.
On the Node-to-client front, you'll need to learn how to do long polling. It's rather easy - you let a client connect and do not end the response until a message goes through to it. You handle this through event handlers (Postal.JS is worth picking up for this).
The AJAX part is straightforward: $.get("your/node/url").then(function(d) { }); works out of the box. When it comes back (either success or failure), relaunch it. Set its timeout to 60 seconds or so, and end the response on the Node side the moment an event targets it.
This is how most sites do it. The problem with websockets is that, right now, they're a bit of a black sheep due to old IE versions not supporting them. Consider long polling instead if you can afford it.
(Psst. Whoever told you that Node wouldn't work with thousands of users is talking through their ass. If anything, Node is better suited to large concurrency than PHP, because a connection on Node costs almost nothing to keep alive thanks to its event-driven nature. Don't listen to naysayers.)