We're building a latency-sensitive web application that uses websockets (with a Flash fallback) to send messages to the server. There is an excellent tool called Yahoo Boomerang for measuring bandwidth/latency in web apps, but the latency value Boomerang produces also includes the time needed to establish an HTTP connection, which is not what I need: the websocket connection is already established and I only need to measure the ping time. Is there any way to solve this?
Second, Boomerang appears to fire only once when the page is loaded and doesn't seem to rerun the tests later even if commanded to. Is it possible to force it to run connection tests e.g. every 60 seconds?
Seems pretty trivial to me.
Send PING to the server. Time is t1.
Read PONG response. Time is t2 now.
ping time = t2 - t1
Repeat every once in a while (and optionally report to the stats server).
Obviously, your server would have to know to send PONG in response to a PING command.
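A minimal client-side sketch of those steps (the URL and the "ping"/"pong" strings are placeholders, and it assumes your server echoes "pong" for every "ping" it receives):

var socket = new WebSocket('wss://example.com/ws'); // placeholder endpoint
var pingSentAt = null;

socket.onmessage = function (event) {
    if (event.data === 'pong' && pingSentAt !== null) {
        var rtt = Date.now() - pingSentAt; // t2 - t1, in milliseconds
        pingSentAt = null;
        console.log('ping time:', rtt, 'ms');
        // optionally report rtt to your stats server here
    }
};

setInterval(function () {
    if (socket.readyState === WebSocket.OPEN) {
        pingSentAt = Date.now(); // t1
        socket.send('ping');
    }
}, 60000); // repeat every 60 seconds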
I am creating a question-answering application using Node.js + Express for my back-end. The front-end sends the question data to the back-end, which in turn makes requests to multiple third-party APIs to get the answer data.
The problem is that some of those third-party APIs take too long to respond, since they have to do some intense processing and calculations. For that reason, I have already implemented a caching system that saves the answer data for each different question. Nevertheless, the first request for each question might take up to 5 minutes.
Since my back-end server waits and does not respond to the front-end until the data arrives (the connections are kept open), it can only serve 6 requests concurrently (that's what I have found). This is unacceptable in terms of performance.
What would be a workaround to this problem? Is there a way to not "clog" the server, so it can serve more than 6 users?
Is there a design pattern in which the server gives an initial response and then serves the full data later?
Perhaps something that puts the request to "sleep" and frees up space for new connections?
Your server can serve many thousands of simultaneous requests if things are coded properly and it's not CPU intensive, just waiting for network responses. This is something that node.js is particularly good at.
A single browser, however, will only send a few requests at a time (it varies by browser) to the same host, queuing the others until the earlier ones finish. So, my guess is that you're trying to test this from a single browser. That's not going to test what you really want to test, because the browser itself is limiting the number of simultaneous requests. node.js is particularly good at having lots of requests in flight at the same time. It can easily handle thousands.
But, if you really have an operation that takes up to 5 minutes, that probably won't work over a plain http request from a browser anyway, because the browser will likely time out an inactive connection that is still waiting for a result.
I can think of a couple possible solutions:
First, you could make the first http request just start the process and return immediately with an ID. Then, the client can check every 30 seconds or so after that, sending the ID in an http request, and your server can respond whether it has the result yet for that ID. This would be a client-polling solution.
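As a rough sketch of that client-polling pattern with Express (the route names, the in-memory job store, and the fetchAnswerFromThirdParties helper are all placeholders, not part of the original setup):

const express = require('express');
const crypto = require('crypto');
const app = express();

const jobs = {}; // placeholder in-memory store: jobId -> { done, result }

app.post('/questions', (req, res) => {
    const jobId = crypto.randomUUID();
    jobs[jobId] = { done: false, result: null };

    // Kick off the slow third-party work without holding the response open.
    // fetchAnswerFromThirdParties is a hypothetical helper for the slow APIs.
    fetchAnswerFromThirdParties(req.query.q).then((result) => {
        jobs[jobId] = { done: true, result };
    });

    res.json({ jobId }); // return immediately with an ID
});

app.get('/questions/:jobId', (req, res) => {
    const job = jobs[req.params.jobId];
    if (!job) return res.status(404).end();
    res.json(job.done ? { done: true, answer: job.result } : { done: false });
});

app.listen(3000);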
Second, you could establish a webSocket or socket.io connection from client to server. Then, send a message over that socket to start the request. Then, whenever the server finishes its work, it can just send the result directly to the client over the webSocket or socket.io connection. After receiving the response, the client can either keep the webSocket/socket.io connection open for use again in the future or it can close it.
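And a rough server-side sketch of the socket.io variant (the event names and the fetchAnswerFromThirdParties helper are again placeholders):

const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', (socket) => {
    socket.on('ask', async (question) => {
        // Same hypothetical slow helper as above; nothing else is blocked while it runs.
        const answer = await fetchAnswerFromThirdParties(question);
        // Push the result back over the open connection whenever it is ready.
        socket.emit('answer', answer);
    });
});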
I use setInterval to fetch my notification counter every 5 seconds. I think it's a bad idea to get the results this way, because if you stay on my site for a while you end up with an enormous number of XHR requests, whereas if you use Facebook you don't see nearly as many XHRs. Here is a capture of my site's XHR requests:
My code, in notification.php:
function getnotificount() {
    $.post('getnotificount.php', function(data) {
        $('#notifi_count').html(data);
    });
}

setInterval(function() {
    getnotificount();
}, 5000);
Your code is OK. It is not "loading more than a billion XHR requests"; it is starting (and finishing, as we can see) a request every X seconds, and there's nothing wrong with that.
However, it's not the best way to implement a push notification system. That would be websockets, which give your client a way to "listen" for messages from your server. There are frameworks for this; the most popular one (and the one I recommend) is socket.io.
Your third and most advanced/modern option would be a service-worker-based notification system, but that is almost certainly more complex than this use case calls for.
What you are doing is polling, which means making a new request to the server on a regular basis. For an http request, a TCP connection is opened, a request and response are exchanged, and the TCP connection is closed. This is what happens every 5s in your case.
If you want a more lightweight solution, have a look at websockets, for example via socket.io. Only one TCP connection is opened and maintained between the front end and the back end. This bidirectional connection enables the back end to notify the front end when something happens.
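As a minimal client-side sketch of that push approach (it assumes a socket.io server is available and emits a 'notifi_count' event whenever the user's count changes; the URL and event name are placeholders):

// Listen for pushed counts instead of polling every 5 seconds.
// Requires the socket.io client script to be loaded on the page.
var socket = io('http://localhost:3000'); // placeholder address

socket.on('notifi_count', function (count) {
    $('#notifi_count').html(count);
});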
It is not a bad idea at all. It is called polling and it is used in many places as a means of regularly getting data from the server. However, it is neither the best nor the most modern solution. In fact, if your server supports WebSockets, then you should use them. You do not need Node.js to use WebSockets, since WebSocket is a protocol for creating a duplex communication channel between your server and the client. You should read up on WebSockets. You could also use push notifications, which I consider inferior to WebSockets. And a hacky way is to use a forever frame.
I have a Node server running locally and I connect to it from my web browser. I use a websocket for communication between the client and the server. I want to accurately measure the time it takes to send a websocket message to the server and back. I use the JavaScript function Date.now(), but the problem is that in my results a message is sometimes received by the server before it was sent by the client. I guess there is a difference between the client's "clock" and the server's "clock", but they are on the same machine.
I would be grateful for an explanation and links to resources.
As John Resig outlined in one of his blog articles, JavaScript's time is not accurate. The clock you can access from JavaScript only "ticks" every few milliseconds, which means it can be roughly ~7ms off the accurate time. Now, if your request takes less time than this small timespan (which can happen when it stays on localhost), it can look as if it was received before it was sent.
sender: real time 1000ms, js time 1000ms
travel time: 5ms
receiver: real time 1005ms, js time 998ms (inaccuracy)
measured difference: 998ms - 1000ms = -2ms
We can't do much to improve this, as the precision of performance.now() was deliberately reduced for security reasons. So the only thing we can do is introduce network latency :), but then the clocks are probably even further apart...
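If you do need to compare the two clocks, a common workaround is to estimate the offset from a round trip measured entirely on the client's clock (an NTP-style sketch; the message names are placeholders, and it assumes the server echoes clientSent back along with its own serverTime):

// 'socket' is the already-open websocket from the question.
socket.onmessage = function (event) {
    var msg = JSON.parse(event.data);
    if (msg.type === 'time_response') {
        var clientReceived = Date.now();
        var rtt = clientReceived - msg.clientSent; // measured on a single clock
        // Assuming symmetric travel time, the server read its clock about rtt / 2 ago:
        var offset = msg.serverTime - (msg.clientSent + rtt / 2);
        console.log('round trip:', rtt, 'ms, estimated clock offset:', offset, 'ms');
    }
};

socket.send(JSON.stringify({ type: 'time_request', clientSent: Date.now() }));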
According to the accepted answer on SO, the ping timeout must be greater than the ping interval, but according to the examples in the official socket.io docs, the timeout is less than the interval. Which one is correct? Also, what would be ideal values for both settings for a shared whiteboard application where the moderator should not be able to edit the canvas while disconnected (when the internet drops)?
According to the socket.io documentation:
Among those options:
pingTimeout (Number): how many ms without a pong packet to consider the connection closed (60000)
pingInterval (Number): how many ms before sending a new ping packet (25000).
Those two parameters will impact the delay before a client knows the server is not available anymore. For example, if the underlying TCP connection is not closed properly due to a network issue, a client may have to wait up to pingTimeout + pingInterval ms before getting a disconnect event.
This leads me to believe there is no dependency on one value being greater than the other. You will likely want to set a higher timeout to allow a slow network connection time to receive a response. The interval is the time from a failure until the next attempt; it should be long enough to allow reconnection, but not so long that you are holding the connection open.
As for ideal values, this will be application specific. A few things I would consider:
How responsive must your application be?
How long will your application take to respond to a network request?
How large is the data being passed back and forth?
How many concurrent users will you have?
These are just a few considerations; for a small local application you would likely be fine with a timeout of 10000 and an interval of 5000, but this is a rough guess. You will need to weigh the points mentioned above.
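For reference, both settings are passed as server options; a minimal sketch using the guessed values above (they are illustrations, not recommendations):

const { Server } = require('socket.io');

const io = new Server(3000, {
    pingInterval: 5000, // ms between ping packets
    pingTimeout: 10000  // ms without a pong before the connection is considered closed
});

io.on('connection', (socket) => {
    socket.on('disconnect', (reason) => {
        // e.g. lock the whiteboard for this moderator here
        console.log('client disconnected:', reason);
    });
});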
I have an issue: I need to update information for the user as soon as possible, but I don't know exactly when the change will happen.
I use a setInterval function that checks for differences between the current state and the state at the previous check. If there are any differences, I send an AJAX request and update the info. Is that bad? I can't (or don't know how to) listen for any events in this case.
And what about the interval time? All users (~300 at the same time) are on a local network (ping 15-20 ms). I need to refresh the information immediately. Would 50ms or 500ms be better?
If the question is not very clear, just ask and I'll try to phrase it differently.
Thanks in advance
Solution: Websocket
Websockets allow client applications to respond to messages initiated by the server (compare this with HTTP, where the client first needs to ask the server for data via a request). A good solution would be to use a websocket library or framework. On the client you'll need to create a websocket connection to the server, and on the server you'll need to alert any open websockets whenever an update occurs.
The issue with interval
It doesn't scale. Even if you set the interval to 4000 milliseconds, once you hit 1000 users you are slamming your server with roughly 15,000 requests and responses a minute (1000 users × 15 requests per minute each), most of which return nothing. That wastes bandwidth and processing. Websockets only send data to the client when the event you want to report actually occurs.
Backend: PHP
Frameworks
Ratchet
Ratchet SourceCode
phpwebsocket
PHP-Websockets-Server
Implement a websocket server with one of the frameworks above, register the browser as a client of that endpoint, and it will push data to the client on whatever event you define.
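On the browser side, registering as a client is just a plain websocket connection; a minimal sketch (the endpoint URL and message shape are placeholders for whatever your Ratchet server exposes):

var conn = new WebSocket('ws://localhost:8080'); // placeholder Ratchet endpoint

conn.onopen = function () {
    console.log('connected, waiting for server-initiated updates');
};

conn.onmessage = function (event) {
    var update = JSON.parse(event.data);
    // Refresh the UI as soon as the server pushes a change.
    console.log('update received:', update);
};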