Multiple setInterval with long periods - javascript

In a server running Node.js, I am using multiple setInterval calls, each with a relatively long interval (24 hours, 36 hours, etc.).
setInterval(funcOne, 86400000)
...
setInterval(funcTwo, 172800000)
While they seem to work fine for now, I am a bit concerned about their reliability and performance. The server, of course, is running all the time. I am okay with restarting the functions if the server fails.
I tried to investigate their interaction with the event loop, but it is still a bit vague to me.
I want to make sure that these functions are not blocking the event loop, no matter how many of them there are.
I am looking for possible failures or crashes due to them.
I am keen to understand any performance overhead they might cause, if any.
PS: I am not willing to migrate those functionalities from the server to a cloud function or a serverless app at the moment.
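For what it's worth, a timer with a long period costs essentially nothing while it is idle; the event loop is only occupied for however long the callback itself takes to run when the timer fires. The main failure mode to plan for is an uncaught exception thrown inside a callback, which takes down the whole process. A minimal sketch (funcOne and funcTwo being the question's own functions) of one way to guard them:

// Sketch only: wrap each scheduled job so a thrown error is logged
// instead of crashing the process. (Node clamps delays above 2^31 - 1 ms,
// roughly 24.8 days, but 24 h and 48 h are well below that limit.)
function safely(job) {
  return () => {
    try {
      job();
    } catch (err) {
      console.error('scheduled job failed:', err);
    }
  };
}

setInterval(safely(funcOne), 86400000);  // 24 hours
setInterval(safely(funcTwo), 172800000); // 48 hours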

Related

Run jobs on FCFS basis in Nodejs from a database

I am developing a NodeJS application wherein a user can schedule a job (CPU intensive) to be run. I am keeping the event loop free and want to run the job in a separate process. When the user submits the job, I make an entry in the database (PostgreSQL), with the timestamp along with some other information. The processes should be run in the FCFS order. Upon some research on stackoverflow, I found people suggesting Bulljs (with Redis), Kue, RabbitMQ, etc. as a solution. My doubt is why do I need to use those when I can just poll the database and get the oldest job. I don't intend to poll the db at a regular interval but instead only when the current job is done executing.
My application does not receive too many simultaneous requests. And also users do not wait for the job to be completed. Instead they logout and are notified through mail when the job is done. What can be the potential drawbacks of using child_process (spawn/exec) module as a solution?
My doubt is why do I need to use those when I can just poll the database and get the oldest job.
How are you planning on handling failures? What if Node.js crashes with a job mid-progress, would that affect your users? Would you then retry a failed job? How do you support back-off? How many attempts before it should completely stop?
These questions are answered in the Bull implementation, RabbitMQ and almost every solution you'll find for your current challenge.
From what I've seen, child_process is a lower-level building block in Node.js, meaning that a lot of the functionality you'll typically require (failover/back-off) isn't included; you'll have to implement it yourself.
That's where it usually becomes more trouble than it's worth, although admittedly managing, monitoring and deploying a Redis server may not be the most optimal solution either.
Have you considered a different approach? How would a periodic cron job work, for example?
The challenge with such a system is usually how you plan to handle failure and what impact failure has on your application and end-users.
I will say, in the defense of Bull, for a CPU intensive task I prefer to have a separated instance of the worker process, I can then re-deploy that single process as many times as I need. This keeps my back-end code separated and generally easier to manage, whilst also giving me the ability to easily scale up/down when required.
EDIT: I mention "more trouble than it's worth"; if you're looking to really learn how technology like this is built, go with child_process and build your own abstractions on top. If it's something you need today, use Bull, RabbitMQ or any purpose-built alternative.
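To make the comparison concrete, here is a hedged sketch (not the poster's code) of what Bull gives you in a few lines: retries with back-off and a worker that can live in its own process. It assumes a local Redis instance and hypothetical runCpuIntensiveTask / notifyUserByMail helpers:

const Queue = require('bull');

// Shared queue definition; the web process and the worker process can both use this.
const jobQueue = new Queue('cpu-jobs', 'redis://127.0.0.1:6379');

// Producer (web process): enqueue when the user submits a job.
// Jobs are picked up in FIFO order, so FCFS comes for free.
function submitJob(payload) {
  return jobQueue.add(payload, {
    attempts: 3,                                   // retry a failed job up to 3 times
    backoff: { type: 'exponential', delay: 60000 } // wait longer between each retry
  });
}

// Worker (separate process, can be redeployed/scaled independently).
jobQueue.process(1, async (job) => {
  await runCpuIntensiveTask(job.data); // hypothetical task runner
});

// Notify the user by mail once the job finishes.
jobQueue.on('completed', (job) => notifyUserByMail(job.data)); // hypothetical mailer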

How to profile a Node.js application during a specific time while it is running?

I've got a Node application which listens to a websocket data feed and acts on it by talking to another API. I'm now running into performance problems. Most of the time things are quiet, with the CPU at about 2-5%, but sometimes (about 3 times per 24 hours) the websocket feed we receive suddenly goes wild for a couple minutes with a lot of data. This makes the application do a lot of calculations, causing the CPU to spike to 100%, and causing all sorts of other trouble. I cannot predict these busy times and I can't really replicate it in a testing setup either. For these reasons I'm having a hard time profiling these spikes.
I'm not a Node guru, but I tried profiling this application using the node --prof flag, followed by the --prof-process flag (on a 3GB isolate-0x321c640-v8.log file). This is ok-ish, but the problem is that if I do that I profile the whole time it ran, instead of the high-traffic part of the time it ran.
I checked out the isolate-0x321c640-v8.log file (see excerpt below) hoping for some sort of timestamp on every line so that I could isolate the time I'm interested in, but I can't find anything like that in there.
tick,0x8ad1f58c24,26726463388,0,0x3fedc8b5859026ea,0,0x8ad76332f8,0x8ad7619f68,0x84113fbe10b,0x8ad12fd54f,0x8ad734f837,0x8ad735192b,0x8ad59c2598,0x8ad59c9765
tick,0x8ad1f6d472,26726464443,0,0x3ff76afe21366278,0,0x8ad7633873,0x8ad7619f68,0x84113fbe10b,0x8ad12fd54f,0x8ad734f837,0x8ad735192b,0x8ad59c2598,0x8ad59c9765
tick,0x8ad1206bd5,26726465499,0,0x8ad1f58c40,0,0x8ad76332f8,0x8ad7619f68,0x84113fbe10b,0x8ad12fd54f,0x8ad734f837,0x8ad735192b,0x8ad59c2598,0x8ad59c9765
tick,0x8ad1f6d472,26726466552,0,0x400040d9bba74cfb,0,0x8ad763377d,0x8ad7619f68,0x84113fbe10b,0x8ad12fd54f,0x8ad734f837,0x8ad735192b,0x8ad59c2598,0x8ad59c9765
tick,0x8ad1f591fa,26726467615,0,0x3fe94cccccccccce,0,0x8ad7626638,0x8ad761c1d9,0x84113fbe10b,0x8ad12fd54f,0x8ad734f837,0x8ad735192b,0x8ad59c2598,0x8ad59c9765
tick,0x8ad1f6d472,26726468680,0,0x7ffcc894f270,0,0x8ad1f59054,0x8ad7626638,0x8ad761c1d9,0x84113fbe10b,0x8ad12fd54f,0x8ad734f837,0x8ad735192b,0x8ad59c2598,0x8ad59c9765
tick,0x8ad1f6d41c,26726469744,0,0x329ab68,0,0x8ad7626cc9,0x8ad761c1d9,0x84113fbe10b,0x8ad12fd54f,0x8ad734f837,0x8ad735192b,0x8ad59c2598,0x8ad59c9765
Is there a good way to profile these specific times during runtime?
I can't suggest an easy answer for your problem, but here is a roadmap of Node.js tools and techniques for profiling and optimization that may be helpful:
Netflix JavaScript Talks - Debugging Node.js in Production - they present an interesting idea of capturing a complete CPU and memory dump at the moment of a crash, which could be useful for you.
0x - a flame-graph profiling tool that is very helpful for understanding how the CPU behaves.
A short article on how to use 0x: "Squeeze Node.js performance with flame graphs".
clinic - a very interesting profiling and performance-auditing tool for your Node.js application, based on async_hooks.
Instead of hunting down and hoping to "catch" the time frame when the issue occurs, I would try to simulate it by stress testing your app.
First, set up the tools you need to monitor what is happening. Anything you find readable and helpful should be used here. For example, you can use 0x, v8-profiler, heap-profile, etc. The key is to try them and see which is the most readable and easy to follow, since some of them produce so much data that you get lost in it.
Then get a sample of the data coming in and use libraries like Artillery to make tons of requests and see what happens. Between the flame graphs from 0x and the data from the other debugging libraries, you should have enough information to reach a conclusion.
If simulating this is not possible, then you can simply run any of the profiling libraries on a setInterval and write the data to disk. For example, heap-profile is used with this approach:
const heapProfile = require('heap-profile');
heapProfile.start();
// Write a snapshot to disk every hour
setInterval(() => {
  heapProfile.write((err, filename) => {
    console.log(`heapProfile.write. err: ${err} filename: ${filename}`);
  });
}, 60 * 60 * 1000).unref();
So every hour you would have a data file to look into.
If your Node process crashes, then you could write the dump in an uncaughtException handler:
process.on('uncaughtException', function (err) {
  heapProfile.write(...)
});
Hope this helps.
Note: from my understanding, whether process.on('uncaughtException', ...) is the correct way to handle Node crashes is still under debate, versus the use of domains.
Have you thought about integrating a third-party Application Performance Management (APM) tool like New Relic, AppDynamics, Atatus or Keymetrics?
They all have their strengths and weaknesses and different pricing plans, but they all share the goal of helping developers better understand their applications and pin down problems like yours.
Some of them have a free trial. That might be a good start to see if it fits your needs or not.

Long-polling Slowing Down Web App

This is more of a concept question, as I am trying to learn more about long-polling, in JavaScript/jQuery specifically. I have a web app where I am long-polling (WebSockets are not an option right now) against a JSON file. I have run some tests, and after leaving the app open for some time it starts to slow down and later seems to get stuck. Using Chrome's developer tools, I've seen the memory usage go through the roof, as does the listener count (>5000), at around 1.5 hours of uptime. I've searched and searched but can't find anything that pinpoints this problem and its solution. In my code, I am polling with setInterval every 30 seconds. What do I need to do to keep the memory and listener counts low, and make sure the app does not overload and slow down? The app needs to stay up for long periods of time.
Thank you for your help.
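For illustration only (the question's actual code isn't shown): one common way a listener count climbs like this is re-binding handlers inside the polling callback instead of binding them once, e.g. with delegation. A minimal sketch, assuming jQuery and hypothetical renderRows / handleRowClick helpers:

// Leaky pattern: every 30-second poll attaches brand-new handlers,
// so the listener count grows without bound.
setInterval(function () {
  $.getJSON('data.json', function (data) {
    renderRows(data);                       // hypothetical render helper
    $('.row').on('click', handleRowClick);  // BAD: re-bound on every poll
  });
}, 30000);

// Safer: bind once, outside the polling loop, using event delegation.
$(document).on('click', '.row', handleRowClick);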

How to get metrics of javascript/dom loading and processing time

We found ourselves working on a web page that has to handle more than 20k users in real time.
We worked so hard to optimize the back-end system but we now need to make optimizations in the front end.
The first thing I came up with was to use some tool to monitor the load time of the JS files.
But we don't really need the load times of the JavaScript; what we really need is to know which parts of our JavaScript code are taking the longest to finish.
We are currently using New Relic to track our site and to know which server-side scripts need to be optimized.
But I can't see any metrics about front-end code or which files need to be optimized.
Is there any tool out there that may help us with this?
The best way to test your JavaScript speed across browsers is to write your own function.
Wrap whatever code you want to test in a function and run that function as many times as you possibly can within 1 second, counting the iterations. Run that test at least five times, because you will get a different result every time, and then find the average number of iterations. I also suggest using Chrome, since it is about 4 times faster than any other browser out there when it comes to JavaScript performance. Test all your code and optimize it for your browser, and the improvement will impact your users' experience as well.
I have my own speedTest object that I wrote, which you can view at my website here: http://www.deckersdesign.com/source.php (still under construction). FYI, this object finds the median of the iterations, not the average.
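For reference, a minimal sketch of the kind of test described above: run the code under test as many times as possible within one second, repeat at least five times, and take the median (as the answerer suggests) rather than the mean. testFn here is a hypothetical placeholder for whatever you wrap your code in:

function speedTest(testFn, runs = 5, budgetMs = 1000) {
  const counts = [];
  for (let r = 0; r < runs; r++) {
    let iterations = 0;
    const end = Date.now() + budgetMs;
    // Run the candidate code as many times as possible within the time budget.
    while (Date.now() < end) {
      testFn();
      iterations++;
    }
    counts.push(iterations);
  }
  counts.sort((a, b) => a - b);
  return counts[Math.floor(counts.length / 2)]; // median iteration count
}

// Example: compare two ways of doing the same thing and keep the faster one.
console.log(speedTest(() => JSON.parse('{"a":1}')));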

At what point do you perform front-end optimization?

I'm talking about things like page/stylesheet caching, minifying javascript, etc.
Half of me thinks it's better to do these things as early as possible while still in development, so I can be consciously aware of more realistic speed and response issues and interact with something that more closely resembles what will be deployed to production. The other half of my brain thinks it makes more sense not to do anything until just before launch, so that during development I'm constantly working with the raw data that has not been optimized.
Is there common or conventional wisdom on this subject?
I do all optimizations at the end. That way I know that when something doesn't work, it is because the code is wrong. I've tried to optimize things too early at times, and realized that I wasted an hour because I was caching something, etc.
Realize that a user spends most of their time waiting on front-end objects to download. Your application may generate HTML in 0.1 seconds, but the user spends at least 2 seconds waiting for all the images etc. to load. Getting these download times down will noticeably improve the user experience.
A lot can already be done by enabling GZIP and using minified JavaScript libraries. You should download and install YSlow and configure your web server with appropriate caching headers. This alone can save hundreds of milliseconds of loading time.
The last step is to reduce the number of images by using CSS sprites. Other steps can include minifying CSS and JavaScript, but these will gain the least of all the methods I already mentioned.
To summarize, most of this can be done by properly configuring your web server; the sprites, however, should be created during development.
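As a rough illustration of the "configure your web server" part, and assuming a Node/Express setup (not something this answer specifies), GZIP and caching headers can look like this:

const express = require('express');
const compression = require('compression'); // gzip/deflate middleware

const app = express();
app.use(compression()); // compress response bodies

// Serve static assets (minified JS/CSS, sprite images) with long-lived cache headers.
app.use(express.static('public', { maxAge: '7d' }));

app.listen(3000);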
I'm a fan of building the site first, then using a user experience profiler like YSlow to do the optimizations at the very end.
http://developer.yahoo.com/yslow/
I should add that a profiler of some sort is essential. Otherwise, you're optimizing without before/after data, which is not exactly scientific (not to mention you won't be able to quantify how much improvement you made).
Premature optimization is the root of all evil :)
Especially early in development and even more so when the optimizations will interfere with your ability to debug the code or understand the flow of the program.
That said, it is important to at least plan for certain optimizations during the design phase so you don't code yourself into a corner where those optimizations are no longer easy to implement (certain kinds of internal caching being a good example).
I agree with your premise that you should do as much optimization in the early stages as you can. It will not only improve development time (think: saving half a second per refresh adds up when you're spamming Ctrl+R!) but it will keep your focus at the end -- when you're refactoring the actual code you've implemented -- on the important stuff. Minify everything that you won't be modifying right off the bat.
I agree with the other half of your brain that says 'do what you have to do, and then do it faster'. But for anything you do you must know and keep in mind how it can be done faster.
The main problem I see with this approach is that at the end it's easier to ignore the fact that you have to optimise, especially if everything seems to be 'good enough'. The other problem is that if you are not experienced with optimisation issues, you may indeed 'code yourself into a corner', and that's where things tend to get really ugly.
I think this might be one where it's difficult to get a clear answer, as different projects will have different requirements, depending on how much work they are doing on the client side.
My rule of thumb would probably be later rather than sooner, only because a lot of the typical front-end optimisation techniques (at least the ones I'm aware of) tend to be fairly easy to implement. I'm thinking of whitespace stripping, changing HTTP headers and so forth here. So I would favour focusing on work that directly answers the problem your project is tackling; once that is being addressed in an effective way, move on to things like optimising front-end response times.
After coding for a number of years, you get a pretty clear idea of what the performance bottlenecks will be:
DB Queries
JS/Jquery functions
cfc's
images
javascripts
By identifying the bottlenecks ahead of time, you can spend time tweaking each piece of the application that deals with them as you create or modify it, knowing that the time spent while coding gives you better performance in the end.
It also tends to make us less lazy, by always creating optimal code and learning what that means in real life.
