Node.js to Socket.io Time Delay - javascript

I am working on a real-time trading application using Node.js (v0.12.4) and Socket.IO (1.3.2). I am seeing a delay of roughly 100 ms when a response is emitted from Node.js to the GUI (Socket.IO).
I don't have a clue why the delay occurs while emitting data from Node.js to the GUI (Socket.IO).
This happens on the production site. We also tried debugging from the production server's location to rule out network latency, but got the same result.
Can anyone help me with this?

One huge thing to note before doing the following: when comparing timestamps taken on the back end (server side) and the front end (client side), you need to run both on the same computer so they share the same timing source. Quartz-crystal-driven clocks, even on high-quality motherboards, differ from one another.
If you find no delay when measuring from back end to front end on the same PC, then the delay you originally found was caused by either the network connection or the difference between the two machines' clocks, which would eliminate Node.js and Socket.IO as the cause.
Basically, you need to find out where the delay is happening before you can solve the problem.
What you need to do is find out what is causing the largest performance hit in your project. To do this, isolate the time each process takes: measure the delay from the initial retrieval of the data to the release of the data, then measure the delay of each function, method, and process. Ask yourself what is taking the most time.
To find out where your performance hit is coming from, do the following.
Find out how long it takes for your code to get the information it needs.
Find out how long each data-manipulation step (each function/method) takes before the data is sent out.
After the data is sent, find out how long it takes to reach the client side and load.
Add all of these times up and make sure the total matches your observed delay, to ensure you have captured everything you need to isolate the leak.
Order every method, function, and process by its time cost, from most to least time-consuming. When you find the processes causing the largest delays, you will be able to fix the problem, or at least explore tangible solutions.
Here are some tools you can use to measure the performance throughout your code.
First: Chrome has a really good tool that lets you see the performance of each piece of executed code.
https://developers.google.com/web/tools/chrome-devtools/profile/evaluate-performance/timeline-tool?hl=en
https://developer.chrome.com/devtools/docs/timeline
Second: the performance-now Node package. You can use it for dev performance testing.
Third: a Stack Overflow post on measuring time/performance in JS.
Fourth: you can use things like console.time() and console.timeEnd()
https://nodejs.org/api/console.html
https://developer.mozilla.org/en-US/docs/Web/API/Console/time
https://developer.chrome.com/devtools/docs/console-api
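For example, here is a minimal timing sketch, assuming a Socket.IO setup; the 'quote' event, its payload, and the renderQuote function are made-up placeholders, and the transport delay is only meaningful when client and server run on the same machine:
// server side: stamp the payload just before emitting
function sendQuote(socket, quote) {
    quote.serverEmitTime = Date.now(); // wall-clock stamp; only comparable on the same machine
    socket.emit('quote', quote);
}

// client side: measure transport and rendering separately
socket.on('quote', function (quote) {
    console.log('transport delay:', Date.now() - quote.serverEmitTime, 'ms');
    console.time('render');
    renderQuote(quote);        // placeholder for the GUI update
    console.timeEnd('render'); // shows how long the UI work took
});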

I have found out where the time delay is happening.
Once the data is emitted from the client socket to Node, I show an alert message ("Data Processed"). That alert takes time to render in the GUI, and it blocks the response data coming back from Node to the socket.
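One way to avoid that kind of blocking, as a rough sketch (the 'processResult' event, showToast, and updateGui are hypothetical stand-ins), is to keep alert() out of the socket flow entirely:
// alert('Data Processed') freezes the main thread until it is dismissed,
// so the Socket.IO callback for Node's response cannot run while the dialog is open.
socket.on('processResult', function (result) { // hypothetical event carrying Node's response
    updateGui(result);                         // hypothetical GUI update, no dialog to wait on
});
showToast('Data Processed');                   // hypothetical non-blocking notification helper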
Thanks for your help, guys.

Related

JavaScript: trying to create a well-behaved background job, but it gets too little time to run while the rest of the system is mostly idle?

In a browser, I am trying to make a well-behaved background job like this:
function run() {
    var system = new System();
    setInterval(function() { system.step(); }, 0);
}
It doesn't matter what that System object is or what the step function does [except that it needs to interact with the UI; in my case it updates a canvas to run Conway's Game of Life in the background]. The activity is performed slowly and I want it to run faster, but even though I specified no wait time in the setInterval, the profiling tool in Chrome tells me the whole thing is 80% idle.
Is there a way to make it spend less time idle and perform my job more quickly on a best-effort basis? Or do I have to make my own infinite loop and then somehow yield time back to the event loop on a regular basis?
UPDATE: It was proposed to use requestIdleCallback, and doing that actually makes it worse. The activity is noticeably slower, even if the profiling data isn't very obvious about it, but the idle time has indeed increased.
UPDATE: It was then proposed to use requestAnimationFrame, and I find that once again the slowness and idleness are the same as with the requestIdleCallback method, and both run at about half the speed that I get from the standard setInterval.
PS: I have updated all the timings to be comparable, all three now timing about 10 seconds of the same code running. I had the suspicion that perhaps the recursive re-scheduling might be the cause for the greater slowness, but I ruled that out, as the recursive setTimeout call is about the same speed as the setInterval method, and both are about twice as fast as these new request*Callback methods.
I did find a viable solution for what I'm doing in practice, and I will provide my own answer later, but will wait for a moment longer.
OK, unless somebody comes up with another answer, this here would be my FINAL UPDATE: I have once again measured all 4 options and recorded the elapsed time to complete a reasonable chunk of work. The results are here:
setTimeout - 31.056 s
setInterval - 23.424 s
requestIdleCallback - 68.149 s
requestAnimationFrame - 68.177 s
This provides objective support for my impression above that the two new request* methods perform worse.
I also have my own practical solution which allows me to complete the same amount of work in 55 ms (0.055 s), i.e., more than 500 times faster, and still be relatively well behaved. I will report on that in a while, but I wonder what anybody else can figure out here.
I think this really depends on what exactly you are trying to achieve, though.
For example, you could initialize your web worker when the page loads and have it run the background job, then communicate the progress or status of the job to the main thread of your browser; a sketch follows the links below. If you don't like using postMessage for communication between the threads, consider using Comlink.
Web worker
Comlink
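A minimal sketch of that approach using plain postMessage (the worker file name and the batching numbers are assumptions):
// main.js: spawn the worker at page load and listen for progress
var worker = new Worker('background-job.js');  // hypothetical worker script
worker.onmessage = function (e) {
    document.title = 'generation ' + e.data;   // cheap visible progress indicator
};

// background-job.js: runs off the main thread, so it never blocks the page
var generation = 0;
function doBatch() {
    for (var i = 0; i < 1000; i++) { // batch many steps per message to limit messaging overhead
        generation++;                // stand-in for one unit of real work
    }
    postMessage(generation);         // report progress to the main thread
    setTimeout(doBatch, 0);          // keep the worker responsive to incoming messages
}
doBatch();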
However, if the background job you intend to run isn't worth a web worker, you could use the requestIdleCallback API. I think it fits what you described, since you can already make it recursive: you would no longer need a timer, and the browser can schedule the task so that it doesn't affect the rendering of your page (by keeping everything at 60 fps).
Something like:
function run() {
    // whatever you want to keep doing
    requestIdleCallback(run);
}
You can read more about requestIdleCallback on MDN.
OK, I really am not trying to prevent others from getting the bounty, but as you can see from the details I added to my question, none of these methods allows high-rate execution of the callback.
In principle setInterval is the most efficient way to do it, since we do not need to re-schedule the next callback every time, but it is only a small difference. Notably, requestIdleCallback and requestAnimationFrame are the worst when you want to be called back rapidly.
So, instead of executing only a tiny amount of work and then expecting to be called back quickly, we need to batch up more work. The problem is that we don't know exactly how much work we should batch up before it is too much; in most cases that can probably be figured out by trial and error.
Dynamically, one might take timing probes to find out how quickly we are being called back and preemptively exit the work loop (of some kind) when the budget between callbacks has expired.
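A rough sketch of that batching idea (the 12 ms budget and the doOneStep function are assumptions, not measured values):
var BUDGET_MS = 12; // assumed per-slice budget; tune by trial and error
function runBatch() {
    var start = performance.now();
    do {
        doOneStep(); // hypothetical smallest unit of work, e.g. one simulation step
    } while (performance.now() - start < BUDGET_MS); // batch work until the slice budget is used up
    setTimeout(runBatch, 0); // yield to the event loop, then continue
}
runBatch();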

Performance in Websockets blob decoding causing memory problems

I have a performance problem in JavaScript that has lately been causing a crash at work. With the objective of modernising our applications, we are looking into running them as web servers, to which our clients would connect via a browser (Chrome, Firefox, ...), with all our interfaces running as HTML+JS web pages.
To give you an overview of our performance needs, our applications run image processing from camera sources, in some cases at more than 20 fps, but in most cases around 2-3 fps max.
Basically, we have a web server written in C++ which handles HTTP requests and provides the user with the HTML pages of the interface and the corresponding JS scripts of the application.
In order to simplify the communication between the two applications, I then open a WebSocket between the web page and the C++ server to send formatted messages back and forth. These messages can be pretty big, up to several MB.
It all works pretty well as long as the FPS stays relatively low. When the FPS increases, one of two things happens.
Either the C++ web server's memory footprint grows very fast and it crashes when no more memory is available. After investigation, this happens when the network is saturated and the WebSocket send buffer fills up. I think this is due to the TCP/IP nature of WebSockets, as the socket must wait for a message to be sent and received before sending the next one.
Or the browser crashes after a while, showing the "Aw, snap" screen. In that case, more or less the same thing seems to happen, but this time apparently due to the garbage-collection strategy: a screenshot of the memory usage while the application is running clearly shows a sawtooth pattern, and it seems to indicate that garbage collection runs at intervals that are further and further apart.
I have tracked the problem down to very big messages (>100 KB) being sent at a fast rate per second, and the bigger the message, the sooner it happens. In order to use a message I receive, I start a web worker, pass the Blob I received to it, and the web worker uses a FileReaderSync to convert the message to an ArrayBuffer and passes it back to the main thread. I expect this to involve quite a lot of copies under the hood, but I am not well versed enough in JS yet to be sure of that statement. Also, I initially did the same thing without the web worker (using FileReader), but the frame rate and CPU usage were really bad...
Here is the code I call to decode the messages:
function OnDataMessage(msg)
{
    var webworkerDataMessage = new Worker('/js/EDXLib/MessageDecoderEvent.js'); // please no comments about this, it's actually a bit nicer on the CPU than reusing the same worker :-)
    webworkerDataMessage.onmessage = MessageFileReaderOnLoadComManagerCBack;
    webworkerDataMessage.onerror = ErrorHandler;
    webworkerDataMessage.postMessage(msg.data);
}

function MessageFileReaderOnLoadComManagerCBack(e)
{
    comManager.OnDataMessageReceived(e.data);
}
and the webworker code:
function DecodeMessage(msg)
{
    var retMsg = new FileReaderSync().readAsArrayBuffer(msg);
    postMessage(retMsg);
}

function receiveDecodingRequest(e)
{
    DecodeMessage(e.data);
}

addEventListener("message", receiveDecodingRequest, true);
My questions are the following:
Is there a way to keep the GC from having to collect so much memory, for instance by telling some of the parts I use to reuse buffers instead of recreating them, or by keeping the GC intervals fixed? This is something I know how to do in C++, but how about in JS?
Is there another method I should use for my big payloads? Keep in mind that the transmission should be as fast as possible.
Is there another method for reading Blob data as ArrayBuffers that would be faster than what I did?
I thank you in advance for your help/comments.
As it turns out, the memory problem was due to the new Worker line and the new FileReaderSync line in the web worker.
Removing these greatly improved the performance!
It also turns out that this decoding operation is not necessary at all if I want to consume the WebSocket data as an ArrayBuffer: I just need to set the binaryType attribute of the WebSocket to "arraybuffer"...
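As a rough illustration of that last point (the URL is a placeholder), with binaryType set, each message arrives as an ArrayBuffer directly, so there is no Blob to decode:
var socket = new WebSocket('ws://example.local:8080'); // placeholder URL
socket.binaryType = 'arraybuffer';                     // messages now arrive as ArrayBuffer, not Blob
socket.onmessage = function (msg) {
    comManager.OnDataMessageReceived(msg.data);        // msg.data is already an ArrayBuffer; no worker or FileReaderSync needed
};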
So all in all, a very simple solution to a pain in the *** problem :-)

Is it possible to measure rendering time in webgl using gl.finish()?

I am trying to measure the time it takes for an image in WebGL to load.
I was thinking about using gl.finish() to get a timestamp before and after the image has loaded and subtracting the two to get an accurate measurement; however, I couldn't find a good example of this kind of usage.
Is this sort of thing possible, and if so, can someone provide sample code?
It is now possible to time WebGL2 executions with the EXT_disjoint_timer_query_webgl2 extension.
const ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
const query = gl.createQuery();
gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
/* gl.draw*, etc */
gl.endQuery(ext.TIME_ELAPSED_EXT);
Then sometime later, you can get the elapsed time for your query:
const available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
if (available) {
    const elapsedNanos = gl.getQueryParameter(query, gl.QUERY_RESULT);
}
A couple of things to be aware of:
Only one timing query may be in progress at once.
Results may become available asynchronously. If you have more than one call to time per frame, you may consider using a query pool.
No, it is not.
In fact, in Chrome gl.finish is just a gl.flush. See the code and search for "::finish".
Because Chrome is multi-process and actually implements security in depth, the actual GL calls are issued in another process from your JavaScript. So even if Chrome did call gl.finish, it would happen in another process and, from the point of view of JavaScript, would not be accurate for timing in any way, shape, or form. Firefox is apparently in the process of doing something similar for similar reasons.
Even outside of Chrome, every driver handles gl.finish differently. Using gl.finish for timing is not useful because it's not representative of actual speed, since it includes stalling the GPU pipeline. In other words, timing with gl.finish includes lots of overhead that wouldn't happen in real use, so it is not an accurate measurement of how fast something would execute under normal circumstances.
There are GL extensions on some GPUs to get timing info. Unfortunately they (a) are not available in WebGL and (b) will not likely ever be as they are not portable as they can't really work on tiled GPUs like those found in many mobile phones.
Instead of asking how to time GL calls what specifically are you trying to achieve by timing them? Maybe people can suggest a solution to that.
Being client-based, WebGL event timings depend on the current load of the client machine (CPU load), the GPU load, and the implementation of the client itself. One way to get a very rough estimate is to measure the round-trip latency from server to client using an XMLHttpRequest (http://en.wikipedia.org/wiki/XMLHttpRequest). By finding the delay between server-measured time and local time, a rough measure of loading can be obtained.
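A rough sketch of that round-trip measurement (the /ping endpoint is a placeholder for any URL that responds immediately):
var xhr = new XMLHttpRequest();
var start = Date.now();
xhr.open('GET', '/ping', true); // placeholder endpoint that returns a trivial response
xhr.onload = function () {
    var roundTrip = Date.now() - start; // round-trip latency; halve it for a one-way estimate
    console.log('round-trip latency:', roundTrip, 'ms');
};
xhr.send();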

Disabling script max-execution-time in flex?

How do I completely disable the max execution time for scripts in Flex? The configurable maximum is 60 seconds, but I'm calling off to other interactive processes which will probably run much longer than that. Is there an easy way to disable the maximum script execution time across my entire application?
You can't, and that's probably a good thing. Of course it's a pity that you can't, but looking at the kinds of things some people build with the Flash player, I am quite happy about it.
For simplicity, Adobe decided to promote a single-threaded execution model that allows concurrent operations through asynchronous callbacks. Sometimes this becomes annoying, verbose, and even slower (performing a big calculation in a green thread simply takes longer than doing it directly). It's more of a political choice, so I guess the best you can do is live with it.
Or you could explain what exactly you're up to, so I could propose a solution.
P.S.: There has been quite a lot of discussion about threads for background calculation. Some people use separate SWFs to perform calculations, or push them to Pixel Bender. You may also want to have a look at Alchemy; it supports threading through relatively efficient continuation passing.
I have a long-running SOAP request that times out with Error 1502: "Error #1502: A script has executed for longer than the default timeout period of 15 seconds."
I went to the right-click Properties dialog on the project in Flash Builder 4, then the Flex Compiler Options.
I set the Flex Compiler Options to "-locale en_US -default-script-limits 1000 60".
The locale was already there. It was the -default-script-limits that was cryptic to decipher from the compiler reference.
But I still got the fault with Error 1502 and 15 seconds. I even did a Project->Clean... command and tried again.
So, where is that 15-second timeout set? It turns out (from some Googling, and I'm not entirely sure) that the Flex compiler accepts my setting, but the timeout message is fixed text that always mentions 15 seconds.
I also found that I could try -default-script-limits 1000 65535. That didn't help either. This is from a posting on FlashDevelop.org.
The bottom line for me is that I now need to page or otherwise divide up the information I am requesting in the SOAP call. My code still works fine for small requests.

Browser timing out JavaScript recursive function, how to solve?

I had to develop a newsletter manager with JS + PHP + MySQL, and I would like to know a few things about browsers timing out JS functions. If I'm running a recursive function that delays a call to itself (while PHP returns a list of emails), how can I be sure that the browser won't time out this JS function?
I'm asking this because I remember using a similar newsletter manager which, while doing its AJAX requests, stopped after a few calls without any apparent reason. I know JS is not meant for this, and I should use crontab on the server, but I can't assume the user's server supports cron, so I had to stick with JS + PHP.
PS - This hasn't happened on this app yet; I'm just trying to prevent the worst-case scenario (since I've tested a newsletter manager that worked the same way as the one I'm developing). Since my dummy email list is small and the delays between calls are also small, this works just fine, but let's imagine a 1,000-contact list with a 120-second delay between sends: sending 30 emails every 2 minutes.
By the way, why do this? Well, many hosting servers have a limit on emails sent per day or hour, and this helps avoid violating that policy.
From the MooTools standpoint, there are several possible solutions here.
Request.Periodical - http://mootools.net/docs/more/Request/Request.Periodical
It has plenty of options for handling batches of jobs; look at it like a more complex .periodical (setInterval) that understands the async nature of the result and can compensate for lag, etc. I think it can literally do what you set out in your requirements out of the box; all you need is an onComplete callback that clears the finished item from your pending array (for example).
Request.Queue - http://mootools.net/docs/more/Request/Request.Queue
Basically, set up all your requests to handle the chunks of data and pass them to Request.Queue to handle sequentially. It is probably less sophisticated from the point of view of send-rate control.
How about a meta refresh? That will not cause a timeout in your JavaScript function. You just reload your page after a specific time and then send the next batch of emails out. By adding a parameter to the URL, you can find out which "round" you are on.
Can this do the job for you?
You need to use setTimeout. The code needs to yield control to the UI thread and let the browser stay responsive, to avoid the script being stopped; see the sketch after the link below.
Read this post by Nick Z.
http://www.nczonline.net/blog/2009/01/13/speed-up-your-javascript-part-1/
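Here is a hedged sketch of that pattern for the newsletter case (the batch delay, the /newsletter/sendBatch.php endpoint, and its JSON response shape are assumptions):
var BATCH_DELAY_MS = 120000; // assumed 2-minute gap between batches
function sendNextBatch() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/newsletter/sendBatch.php', true); // hypothetical PHP endpoint that sends one batch
    xhr.onload = function () {
        var status = JSON.parse(xhr.responseText);       // assumed to report how many emails remain
        if (status.remaining > 0) {
            setTimeout(sendNextBatch, BATCH_DELAY_MS);   // yield to the UI thread between batches
        }
    };
    xhr.send();
}
sendNextBatch();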
There is also something in the W3C spec about this called "Efficient Script Yielding". I'm not sure how far along it is or whether any browsers support it.
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/setImmediate/Overview.html
You could also try HTML5 Web Workers.
