We're working on a web page that handles more than 20k users in real time.
We've worked hard to optimize the back-end system, but now we need to optimize the front end.
The first thing I came up with was using some tool to monitor the load time of our JS files.
But we don't really need the load times of the JavaScript files; what we really need to know is which parts of our JavaScript code take the most time to finish.
We currently use New Relic to track our site and find out which server-side scripts need to be optimized, but I can't see any metrics about the front-end code or which files need optimizing.
Is there any tool out there that can help us with this?
The best way to test your JavaScript speed across browsers is to write your own test function.
Wrap whatever code you want to test in a function and run that function as many times as you can within one second, counting the iterations. Run that test at least five times, because you will get a different result every time, and then take the average number of iterations. I also suggest using Chrome, since in my experience it is around four times faster than any other browser when it comes to JavaScript performance. Test all your code and optimize it for your browser, and the improvement will reach your users as well.
I have my own speedTest object that I wrote, which you can view on my (under-construction) website here: http://www.deckersdesign.com/source.php. FYI, this object finds the median of the iterations, not the average.
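A minimal sketch of that general approach (this is not the speedTest object from the link above; everything here is just for illustration):

    // Count how many times fn runs in roughly one second, repeat the test
    // several times, and take the median of the iteration counts.
    function speedTest(fn, runs) {
        runs = runs || 5;
        var counts = [];
        for (var r = 0; r < runs; r++) {
            var iterations = 0;
            var end = Date.now() + 1000; // run for about one second
            while (Date.now() < end) {
                fn();
                iterations++;
            }
            counts.push(iterations);
        }
        counts.sort(function (a, b) { return a - b; });
        return counts[Math.floor(counts.length / 2)]; // median iteration count
    }

    // Example: compare two ways of building the same string
    console.log(speedTest(function () { return [1, 2, 3].join("-"); }));
    console.log(speedTest(function () { return "" + 1 + "-" + 2 + "-" + 3; }));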
I have a CRA (Create React App) project and I'm trying to work out the download speed of index.html. How can I see this? I can see the .chunk.js file took 11 seconds to download over slow 3G, and it's only ~250 KB. Is this normal?
I want to work out how long the HTML takes to load so I can then work out how long the JavaScript takes to kick in, and monitor this whole loading process. Does anyone have any good tips for doing this?
I'm currently setting a sessionStorage value to Date.now(), but I need to know exactly when to fire this so that I'm comparing two accurate values.
This won't pin down the exact time taken to load index.html, but it assesses the website from a user's perspective and tells you what optimizations you can make: if your end goal is simply performance optimization, I recommend using Lighthouse.
Its report shows assessment results and the time taken for certain items to load or to be perceived as visible/interactable.
Lighthouse runs inside Chrome DevTools, and there's also a CLI version (which I prefer 😛).
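If you also want the raw numbers in code rather than a report, here is a small sketch using the standard Navigation Timing API; it gives the timestamps the question is after (HTML download, DOM ready, full load) without any sessionStorage/Date.now() bookkeeping:

    window.addEventListener("load", function () {
        // loadEventEnd is only filled in after the load handler returns,
        // so read the entries on the next tick.
        setTimeout(function () {
            var nav = performance.getEntriesByType("navigation")[0];
            if (!nav) return; // older browsers don't expose this entry type

            console.log("HTML download:", nav.responseEnd - nav.requestStart, "ms");
            console.log("DOM interactive at:", nav.domInteractive, "ms");
            console.log("DOMContentLoaded at:", nav.domContentLoadedEventEnd, "ms");
            console.log("Fully loaded at:", nav.loadEventEnd, "ms");

            // Per-resource timings, e.g. the CRA chunk files mentioned above
            performance.getEntriesByType("resource").forEach(function (r) {
                if (r.initiatorType === "script") {
                    console.log(r.name, Math.round(r.duration), "ms");
                }
            });
        }, 0);
    });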
I need to extract about 300,000 final lines, roughly 30 MB, from thousands of online JSON files.
Being a beginner in coding, I'd prefer to stick with JS: $.getJSON the data, cut it down, append the interesting parts to my <body>, and loop over the thousands of online JSON files. But I wonder:
Can my web browser handle 300,000 $.getJSON requests and the resulting 30-50 MB webpage without crashing?
Is it possible to use JS to write the results to a file, so the script's work is constantly saved?
I expect my script to run for about 24 hours. The numbers are estimates.
Edit: I don't have any server-side knowledge, just JS.
A few things about this approach aren't right:
If what you are doing is fetching (and processing) data from another source and then displaying it to visitors, processing at this scale should be done separately, beforehand, in a background process. Web browsers should not be used as data processors on the scale you're talking about.
If you try to display a 30-50 MB webpage, your users are going to experience lots of frustrating issues: browser crashes, lack of responsiveness, timeouts, long load times, and so on. If you expect any users on older IE browsers, they might as well give up without even trying.
My recommendation is to pull this task out and do it using your backend infrastructure, saving the results in a database that can then be searched, filtered, and accessed by your users (a sketch of such a job follows the options below). Some options worth looking into:
Cron
Cron will allow you to run a task on a repeated and regular basis, such as daily or hourly. Use this if you want to continually update your dataset.
Worker (Heroku)
If you're running on Heroku, take this work out of the web dyno and use a separate worker dyno so it doesn't clog up existing traffic to your app.
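Since the question mentions knowing only JS, here is a rough sketch of what such a background job could look like in Node.js; the URL list, the filtering, and the output file name are all placeholders:

    // fetch-data.js - hypothetical background job: fetch many JSON endpoints,
    // keep the interesting parts, and append them to a local file so the work
    // is saved as it goes. Could be run hourly via cron, e.g.:
    //   0 * * * * node /path/to/fetch-data.js
    const fs = require("fs");

    const urls = [ /* ...the thousands of JSON URLs... */ ];
    const OUT = "results.ndjson"; // one JSON object per line

    async function run() {
        for (const url of urls) {
            try {
                const res = await fetch(url); // global fetch needs Node 18+
                const data = await res.json();
                const interesting = pickInterestingParts(data); // placeholder
                fs.appendFileSync(OUT, JSON.stringify(interesting) + "\n");
            } catch (err) {
                console.error("failed:", url, err.message);
            }
        }
    }

    function pickInterestingParts(data) {
        return data; // replace with the real "cut it down" logic
    }

    run();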
Lately, I've been playing around with Ramsey's theorem for R(5,5). You can see some examples of previous attempts here: http://zacharymaril.com/thoughts/constructionGraph.html
The essence: find all the K4s in a graph and its complement, then connect another point in such a way that no K5 is formed. (I know that with one type of choice it becomes mathematically improbable that you would get past 14, but there are ways around that choice, and I've gotten it to run as far as 22-23 points without bricking my browser.)
With new ideas, I started playing around with storing information from batch to batch. The current construction graph searches for all the K4s in the graph every time it sees the graph. I thought this was overkill, since the K4s in the previous graph stay the same and new K4s can only appear among the connections produced by adding the new point. If you store the previously found K4s and then search only the newly created frontier, you reduce the number of comparisons from C(n, 4) to C(n-1, 3) (for example, at n = 20 that is 969 checks instead of 4,845).
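Roughly, the incremental step looks like this (a sketch assuming a plain adjacency-matrix representation; it is not the actual code on the pages linked in this question):

    // adj is a symmetric adjacency matrix that already includes the new
    // vertex v (v === adj.length - 1); k4s is the list of K4s found so far.
    // Any new K4 must contain v, so only triples of old vertices are tested:
    // C(n-1, 3) checks instead of re-scanning all C(n, 4) quadruples.
    function addVertexK4s(adj, k4s, v) {
        for (var a = 0; a < v; a++) {
            if (!adj[a][v]) continue;
            for (var b = a + 1; b < v; b++) {
                if (!adj[b][v] || !adj[a][b]) continue;
                for (var c = b + 1; c < v; c++) {
                    if (adj[c][v] && adj[a][c] && adj[b][c]) {
                        k4s.push([a, b, c, v]); // {a, b, c} is a triangle joined to v
                    }
                }
            }
        }
        return k4s;
    }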
I took a shot at implementing this last night and got it to work without obvious errors. While I am going to go back and comb through it for problems, the new method makes the program much, much slower. Before, the time it took to do all the comparisons was only roughly doubling; now it is going up in what looks like factorial time. I've gone back through and tried to ferret out any obvious errors, but I am wondering whether the new dependence on memory could have caused the whole slowdown.
So, with that long intro, my main question is: how are the memory use and speed of a program related in a web browser like Chrome? Am I slowing the program down by keeping a bunch of little graphs around as JSON objects? In theory, should the amount of memory I take up not matter for speed? Where can I learn more about the connection between the two? Is there a book that explains this sort of thing?
Thank you for any advice or answers. Sorry about the length of this: I am still buried pretty deep in the idea and it's hard to explain it briefly.
Edit:
Here are the two webpages that show each algorithm:
With storage of previous finds:
http://zacharymaril.com/thoughts/constructionGraph.html
Without storage of previous finds:
http://zacharymaril.com/thoughts/Expanding%20Frontier/expandingFrontier.html
Both are best viewed in Chrome, since that is the browser I have been using to build this. If you open the dev panel with Ctrl+Shift+I and type "times", you can see a collection of all the times recorded so far.
The memory use and speed of a program are not related in any simple, direct way.
Simple examples:
- A computer with almost no RAM but lots of hard drive space will thrash the hard drive for virtual memory. That slows things down, because hard drives are significantly slower than RAM.
- A computer with plenty of RAM won't have to go to the hard drive, so it stays fast.
- Caching usually takes up a lot of RAM, yet it significantly increases the speed of an application. This is how memcache works.
- An algorithm may take a long time but use very little RAM. Think of a program that tries to calculate pi: it will never finish, but it needs very little RAM.
In general, the less RAM you use (caching aside), the better for speed, because there's less chance you'll run into constraints imposed by other processes.
If you have a program that takes considerable time to calculate items that are going to be referenced again, it makes sense to cache them in memory so you don't need to recalculate them.
You can mix the two by adding timeouts to the cached items: every time you add another item to the cache, check the items already there and remove any that haven't been accessed in a while. "A while" is determined by your needs.
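A small sketch of that idea; the one-minute window is just a placeholder for "a while":

    // Tiny cache that remembers when each entry was last accessed and, on
    // every insert, evicts anything untouched for longer than maxAgeMs.
    function createCache(maxAgeMs) {
        var entries = {}; // key -> { value, lastAccess }

        function evictStale() {
            var now = Date.now();
            for (var key in entries) {
                if (now - entries[key].lastAccess > maxAgeMs) {
                    delete entries[key];
                }
            }
        }

        return {
            set: function (key, value) {
                evictStale(); // prune on every insert, as described above
                entries[key] = { value: value, lastAccess: Date.now() };
            },
            get: function (key) {
                var entry = entries[key];
                if (!entry) return undefined;
                entry.lastAccess = Date.now();
                return entry.value;
            }
        };
    }

    var cache = createCache(60 * 1000); // "a while" = one minute here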
As we all know, there are multiple ways of doing the same thing. I am looking for a way to compare the efficiency of three different JavaScript programs that do the same thing. I've placed the code in separate text files just to rank them by file size, but I don't think the smallest code is necessarily the most efficient. I have Chrome Developer Tools and Firebug; will these get the job done, or is there a fancier way?
You can use the jsPerf.com site to test the performance of different pieces of JS code.
Use Dynatrace to measure the efficiency of the JavaScript:
http://www.dynatrace.com
One way would be to create some test data and set up a loop to benchmark each of your JavaScript programs by running the data through them some number of times, say 10,000 repetitions per program. Use JavaScript to time each program and report the times.
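A rough sketch of that kind of harness, assuming the three scripts are wrapped in functions (variantA, variantB, and variantC are placeholders for your own code):

    // Run each candidate over the same test data many times and print the
    // elapsed time. variantA/variantB/variantC stand in for the three scripts.
    var testData = [];
    for (var i = 0; i < 1000; i++) testData.push(i);

    function benchmark(name, fn, repetitions) {
        repetitions = repetitions || 10000;
        var start = Date.now();
        for (var j = 0; j < repetitions; j++) {
            fn(testData);
        }
        console.log(name + ": " + (Date.now() - start) + " ms");
    }

    benchmark("variant A", variantA);
    benchmark("variant B", variantB);
    benchmark("variant C", variantC);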
I was wondering whether there are any generalities (across all the JavaScript engines out there) in the cost of executing one given instruction versus another.
For instance, eval() is slower than calling a function that has already been declared.
I would like to get a table of various instructions/function calls versus an absolute cost, maybe a cost per engine.
Does such a document exist?
There's a page here (by one of our illustrious hosts, no less) that gives a breakdown by browser and by general class of instruction:
http://www.codinghorror.com/blog/archives/001023.html
The above page links to a more detailed breakdown here:
http://www.codinghorror.com/blog/files/sunspider-09-benchmark-results.txt
Neither of those pages breaks down performance to the level of individual function calls or arithmetic operations or what have you. Still, there is quite a bit of potentially useful information.
There is also a link to the benchmark itself:
http://www2.webkit.org/perf/sunspider-0.9/sunspider.html
By viewing the benchmark source you can get a better idea of what specific function calls are being tested.
It also seems like it might be a simple matter to create your own custom version of the benchmark that collects the more specific data you are interested in. You could then run the modified benchmark on a variety of browsers, perhaps even taking advantage of a service such as browsershots.org to test a wide spectrum of browsers. Not sure how well that would work, but it might be fun to try....
It is of course possible that the same operation executed in the same browser might take significantly different amounts of time depending on the context in which it's being used, in ways that might not be immediately obvious. For example, I could imagine a Javascript engine spending more time optimizing code that is executed frequently, with the result that the code executed in a tight loop might run faster than identical code executed infrequently. Of course, that might not matter much in practice. Still, I imagine that a good table of the sort you are looking for might also summarize any such effects if they turned out to be important.
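As a rough illustration of that kind of custom micro-benchmark (and of the warm-up effect just described), here is a sketch comparing eval() with an already-declared function:

    // Time eval() against a pre-declared function doing the same work. The
    // untimed warm-up loop gives the engine a chance to optimize the code
    // first, which is exactly the context effect discussed above.
    function addDeclared(a, b) { return a + b; }

    function time(label, fn, iterations) {
        iterations = iterations || 100000;
        for (var i = 0; i < 1000; i++) fn(i); // warm-up, not timed
        var start = Date.now();
        for (var j = 0; j < iterations; j++) fn(j);
        console.log(label + ": " + (Date.now() - start) + " ms");
    }

    time("declared function", function (i) { return addDeclared(i, i); });
    time("eval", function (i) { return eval("addDeclared(" + i + ", " + i + ")"); });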