Sorting 3000 records hangs the browser - javascript

I am rendering around 3000 records and using this open-source sorting script. When I click a column header, my browser hangs almost immediately and I can't continue. Is there any solution to this problem?
link text

Update 2
I sent my updates to the original author of the above code, but until he decides to post it, here is my updated version. It speeds up the use of the standard built-in sort() if you decide to do that. It also replaces the stable cocktail sort with a stable merge sort. The merge sort is nearly as fast as using sort() in my tests. I hope that helps.
Update
I no longer think that there is a great discrepancy between browsers as far as the built-in sort() function is concerned. While IE8, for instance, is much slower overall than, say, Chrome, I don't think that has much to do with the sorting function specifically. I did some profiling in IE8 using random data and found that the original code can be improved quite substantially when the column data is numeric or a date. Putting the regexp searches in the comparison functions for these data types slows things down a lot, because they are executed every time a comparison is done between elements, which for 3000 elements is around 60,000 comparisons. The number of regexps is twice that. By doing all of the regexp work before we start sorting, we do 3,000 regexps rather than 120,000. This can be about a 50% saving in time. I'll submit my changes to the sorttable code in a little bit.
Other than that, most of the time is reordering the DOM elements around, not sorting (unless you are using the shaker sort). If you can find a faster way to do that then you can save some time there, but I don't know of any way to do that.
Original answer:
The problem here may have to do with the actual sort. If you uncommented some of the code there (and commented out some other code), then your code is using a shaker sort to get a stable sort. Shaker sort is essentially a bidirectional bubble sort. Bubble sorts are very slow, O(N^2). If you didn't uncomment that code, then it is using javascript's built-in sort() function with various comparator functions. The problem with this is that this sort() function is implemented differently in different browsers so you might want to see if this problem happens in some browsers and not in others. Apparently, the Webkit code still uses a selection, or min, sort which is O(N^2). That almost makes me want to cry. What browser have you been using to test this?
If the sort function turns out to be the problem, then you might try changing the above code to use a merge sort or quicksort, which are both O(N log N). Quicksort is a little trickier if you want to avoid its O(N^2) worst cases, so you might want to stick with merge sort. Also, merge sort is a stable sort. This page has an example to get you started with merge sort.
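For illustration, a stable merge sort in plain JavaScript can be sketched like this (this is just the idea, not the exact code I submitted; the function names are illustrative):

```javascript
// A minimal stable merge sort: returns a new sorted array in O(N log N).
function mergeSort(arr, cmp) {
  if (arr.length < 2) return arr.slice();
  var mid = Math.floor(arr.length / 2);
  var left = mergeSort(arr.slice(0, mid), cmp);
  var right = mergeSort(arr.slice(mid), cmp);
  var result = [], i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // "<=" keeps equal elements in their original order, so the sort is stable
    if (cmp(left[i], right[j]) <= 0) result.push(left[i++]);
    else result.push(right[j++]);
  }
  return result.concat(left.slice(i)).concat(right.slice(j));
}
```

Because the merge step prefers the left half on ties, equal rows never swap places, which is exactly the property the cocktail sort was there to provide.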

You have answered your own question in a way.
Take a look at the sorting script.
Sorting 3000 records, rearranging the DOM, and rendering the output will certainly take time.
The script you are using is meant for small sets of records.
Suggestion: use server-side sorting and render the results in pages, each containing say 50 records. With about 3000 records, you will have around 60 pages.
Let's say you are on the 45th page. You then fire a SQL query that sorts (asc/desc), skips the first 44*50 records, and retrieves the next 50 (for the 45th page).
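The paging math itself is trivial. As a sketch (buildQuery, PAGE_SIZE, and the table/column names are illustrative; in real code you would use parameterized queries and a column whitelist rather than string concatenation):

```javascript
// Compute the LIMIT/OFFSET for a given page of a sorted result set.
var PAGE_SIZE = 50;

function buildQuery(page, sortColumn, direction) {
  var offset = (page - 1) * PAGE_SIZE; // page 45 -> skip 44 * 50 = 2200 records
  return 'SELECT * FROM records ORDER BY ' + sortColumn + ' ' + direction +
         ' LIMIT ' + PAGE_SIZE + ' OFFSET ' + offset;
}
```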

This library seems to use DOM manipulations to sort.
It would be better to generate the table each time and inject it using innerHTML.
In today's javascript engines this looks instantaneous.
Even IE6 is good at that.
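As a sketch of the idea (renderTable and the row fields are illustrative names):

```javascript
// Rebuild the table body as one HTML string and inject it with a single
// innerHTML assignment instead of moving DOM nodes one by one.
// `rows` is assumed to be the already-sorted data, e.g. [{name: ..., age: ...}].
function renderTable(tbody, rows) {
  var html = [];
  for (var i = 0; i < rows.length; i++) {
    html.push('<tr><td>' + rows[i].name + '</td><td>' + rows[i].age + '</td></tr>');
  }
  tbody.innerHTML = html.join(''); // one reflow instead of thousands
}
```

The browser parses one big string and lays the table out once, which is why this feels instantaneous even on old engines.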
That being said... showing 3000 rows to a human is questionable. But that is another debate ;)

Related

Javascript sort function to place element in the middle of an array

Quite a few months ago, I tried to find a way to place an element in the middle of an array of 3 items using the javascript sort function. It looked like there was no clever way of achieving that result.
However, I recently found a working solution to this problem.
I then tried to generalize this solution to a bigger array of (5, 7, 9, ...) items. But, it seems to break when the size of the array exceeds 10 items.
Since I found the solution through trial and error rather than pure thinking, I hope someone is able to explain to me why everything seems to work fine until this seemingly arbitrary number.
Here is a Codepen demo I created to illustrate the issue: https://codepen.io/TMLN/pen/PBoKgR?editors=0011
You can edit the line ReactDOM.render(<Example items={example1} />... and replace the prop example1 by example2, example3, ... You'll see that everything starts breaking from example5 but I really can't figure out why...
PS: this works in Chrome, but for those using Edge, you might encounter a bug due to the way the browser handles the sort function.
Your sort callback function returns 0 when neither of the compared values is equal to the value to be moved. You probably intended to indicate with this zero that the relative position of a and b (the two compared values) should not change, but actually the zero means that the sort implementation should consider the two values equal, and so it is actually free to swap their positions. Only when the implementation is (what is called) stable can you be sure that the relative position of the two values will not change.
Others have found that in Chrome the sort function is not stable, which explains the erratic behaviour you notice. See this Q&A for the situation, with regard to a stable implementation, in different browsers.
To make it work on all JavaScript engines you should change your code, and return a non-zero value for all pairs you compare, based on the index they currently occupy.
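One way to sketch that (moveToMiddle and rank are illustrative names; this assumes distinct values, since indexOf is used to find current positions):

```javascript
// Give every element a distinct numeric rank: the value to move gets the
// middle rank, everything else keeps its current relative order. Because
// the comparator never returns 0, the result no longer depends on whether
// the engine's sort is stable.
function moveToMiddle(arr, value) {
  var copy = arr.slice();
  var mid = Math.floor(copy.length / 2);
  var target = copy.indexOf(value);
  function rank(x) {
    if (x === value) return mid;
    var i = copy.indexOf(x);
    var r = i < target ? i : i - 1; // position ignoring the target element
    return r < mid ? r : r + 1;     // leave the middle slot free for it
  }
  return arr.slice().sort(function (a, b) { return rank(a) - rank(b); });
}
```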
Final note: using sort for this task is a bad choice: a good sorting algorithm has O(n log n) time complexity, but if you need to find indexes of values inside the sort callback, you actually add a factor of n, making it O(n² log n). Taking an element out of an array and injecting it elsewhere can be done in O(n) time. You can use Array#splice (twice) to do just that.
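As a sketch (placeInMiddle is an illustrative name; it assumes distinct values because of indexOf):

```javascript
// The O(n) alternative: remove the value with one splice and insert it
// at the middle index with another.
function placeInMiddle(arr, value) {
  var copy = arr.slice();
  copy.splice(copy.indexOf(value), 1);                 // take the element out
  copy.splice(Math.floor(copy.length / 2), 0, value);  // drop it in the middle
  return copy;
}
```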

Performance of searching through a list of items in javascript

I have a list of approx. 2000 questions and I am trying to create an interface where you can filter through them all using a text input.
I tried going through this React tutorial since I thought it would perform well enough, but there is considerable lag. Or at least there is when I run the code in an Electron container (perhaps I'd get better performance compiling it with Webpack). I just tried putting my code into a jsfiddle, and with 3000 elements the performance starts to suffer.
Is it futile trying to search through this many objects with html and js or is there a simpler way with better performance?
So the lag is not because of the filtering, but because you are trying to render too many objects in one hit. You can see this by typing a sequence of zeros into the filter input. Each zero typed requires less time, as obviously the result size gets smaller and smaller.
I have updated your fiddle here to show the performance if you only render the first 100 items in the result set (even though all 3000 are processed on each input change).
Essentially I am just generating the full rows variable, and then using .slice(0, 100) to generate a truncated version before rendering.
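The idea can be sketched like this (visibleRows and MAX_RENDERED are illustrative names, not the exact fiddle code):

```javascript
// Filter the full data set on every keystroke, but cap how many matches
// actually reach the DOM.
var MAX_RENDERED = 100;

function visibleRows(questions, query) {
  var matches = questions.filter(function (q) {
    return q.toLowerCase().indexOf(query.toLowerCase()) !== -1;
  });
  return matches.slice(0, MAX_RENDERED); // only these get rendered
}
```

The filtering over 3000 strings is cheap; it is creating thousands of DOM nodes that is expensive, and the slice keeps that part bounded.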
What you should do in this situation is think about UI/UX, and that it really isn't necessary to render thousands of items at the same time. You could implement some sort of pagination or infinite scroll, etc.

Cleanup of leap motion controller data

I noticed that the data I'm getting from the Leap Motion controller is quite noisy. Apart from the obvious (i.e. position of the fingers), I've encountered events such as
fingers moving between hands,
"phantom" hands appearing,
fingers disappearing and reappearing immediately afterwards.
Does the API (in particular the Javascript API) provide any means of cleaning this data or is there any other way of making this data less noisy? All of these events can be handled in user code of course, but it seems that having to do this yourself every time would be less than ideal.
In short, no: at the moment developers have to implement that logic themselves. Be aware that this might not be true in the future; the API changes fast.
I had problems with this as well; I solved them by using a circular queue with a max limit of (for example) 100 frames. Then I would track the data for just one pointable and filter out the conditions I considered abnormal. For example width, which is very unreliable: I would get the modal value, accept a range of ±2 around it, and ignore everything else. Works rather well :)
In short, as you already mentioned, you need to collect data, and filter out the noise. Tool and width precision will change they told me. Do a search on the forum for isTool and see how others found ways to get 'stabilized' data.
For me the solution was (for what I wanted, which was to track one pointable, and a reliable width):
Hold a queue of max X items
Set a tolerance limit
Compare the data in the queue
Filter out what was considered noise
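The steps above can be sketched like this (all names are illustrative; the real code tracked a Leap pointable's width property):

```javascript
// Keep the last N samples in a sliding window, find the modal (most
// frequent) value, and reject samples outside a tolerance around it.
function makeWidthFilter(maxSamples, tolerance) {
  var samples = [];
  return function (width) {
    samples.push(width);
    if (samples.length > maxSamples) samples.shift(); // circular-queue behaviour
    var counts = {}, modal = samples[0];
    samples.forEach(function (w) {
      counts[w] = (counts[w] || 0) + 1;
      if (counts[w] > counts[modal]) modal = w;
    });
    return Math.abs(width - modal) <= tolerance ? width : null; // null = noise
  };
}
```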

In jQuery is it more efficient to use filter(), or just do so in the each()?

I currently have code that is pulling in data via jQuery and then displaying it using the each method.
However, I was running into an issue with sorting, so I looked into using, and added, jQuery's filter method before the sort (which makes sense).
I'm now looking at removing the sort, and am wondering if I should leave the filter call as-is, or move it back into the each.
The examples in the jQuery API documentation for filter stick with styling results, not with the output of textual content (specifically, not using each()).
The documentation currently states that "[t]he supplied selector is tested against each element [...]," which makes me believe that doing a filter and an each would result in non-filtered elements being looped through twice, versus only once if the check was made solely in the each loop.
Am I correct in believing that is more efficient?
EDIT: Dummy example.
So this:
// data is XML content
data = data.filter(function (a) {
    return ($(this).attr('display') == "true");
});

data.each(function () {
    // do stuff here to output to the page
});
Versus this:
// data is XML content
data.each(function () {
    if ($(this).attr('display') == "true") {
        // do stuff here to output to the page
    }
});
Exactly as you said:
The documentation currently states that "the supplied selector is tested against each element [...]", which makes me believe that doing a filter and an each would result in non-filtered elements being looped through twice, versus only once if the check was made solely in the each loop.
Your code clearly shows that you are using each in both cases, which is already a loop, and the filter by itself is another loop (with an if inside it to do the filtering). That is, we are comparing the performance of two loops against one loop. Inevitably, fewer loops = better performance.
I created this Fiddle and profiled with Firebug Profiling Tool. As expected, the second option with only one loop is faster. Of course with this small amount of elements the difference was only 0.062ms. But obviously the difference would increase linearly with more elements.
Since many people are keen to point out that the difference is small and that you should choose based on maintainability, I feel free to express my opinion: I agree with that. In fact I think the more maintainable code is the version without the filter, but that is a matter of taste. Still, your question was about which is more efficient, and that is what was answered, even though the difference is small.
You are correct that using filter and each together is slower; it is faster to use just the each loop. Where possible, optimise to use fewer loops.
But this is a micro-optimisation. It should only be done when it's "free" and doesn't come at the cost of readable code. I would personally pick one or the other based on style and readability rather than on performance.
Unless you've got a huge sets of DOM elements you won't notice the difference (and if you do then you've got bigger problems).
And if you care about this difference then you care about not using jQuery because jQuery is slow.
What you should care about is readability and maintainability.
$(selector).filter(function() {
    // get elements I care about
}).each(function() {
    // deal with them
});

vs

$(selector).each(function() {
    // get elements I care about
    if (condition) {
        // deal with them
    }
});
Whichever makes your code more readable and maintainable is the optimum choice. As a separate note, filter is a lot more powerful when used with .map than with .each.
Let me also point out that optimising from two loops to one loop is optimising from O(n) to O(n). That's not something you should care about. In the past I also felt it was "better" to put everything in one loop because you only loop once, but this really limits you in using map/reduce/filter.
Write meaningful, self-documenting code. Only optimise bottlenecks.
I would expect the performance here to be very similar, with the each being slightly faster (probably noticeable in large datasets where the filtered set is still large). Filter probably just loops over the set anyway (someone correct me if I'm wrong). So the first example loops the full set and then loops the smaller set. The 2nd just loops once.
However, if possible, the fastest way would be to include the filter in your initial selector. So let's say your current data variable is the result of calling $("div"). Instead of calling that and then filtering it, use this to begin with:
$("div[display='true']")
I generally don't worry about micro-optimizations like this since in the grand scheme of things, you'll likely have a lot more to worry about in terms of performance than jQuery .each() vs. .filter(), but to answer the question at hand, you should be able to get the best results using one .filter():
data.filter(function() {
    return ($(this).attr('display') === "true");
}).appendTo($('body'));
For a primitive performance comparison between .each() and .filter(), you can check out this codepen:
http://codepen.io/thdoan/pen/LWpwwa
However, if all you're trying to do is output all nodes with display="true" to the page, then you can simply do as suggested by James Montagne (assuming the node is <element>):
$('element[display=true]').appendTo($('body'));

Javascript Efficiency with Constants passed as Parameters?

I have a general JavaScript question. I'll give you my scenario and then ask you the question.
Scenario
I am making a table with (currently) over 3000 rows, and it is growing by 5-10 every day. I am using a javascript plugin to style this table and add useful functionality. It currently takes 15 seconds to fully load the page, after which everything runs smoothly (sorting, paging, etc.). That is a very slow initial load, though. The plugin offers a way with less DOM parsing, where you pass it an array of the information to be placed inside the table, which I am very intrigued by. However, I want to make this as fast as possible, because there will still be an array of 3000 rows (each with 11 columns of, on average, 10 characters).
Question
Would it be significantly faster to use a JavaScript const to store this giant array? Specifically, does JavaScript know not to put a const on the stack when passed as a parameter?
Furthermore, is this simply too much for JavaScript to handle? Should I dismiss this idea and start with AJAX now (which would mean much slower functionality but much faster pageload)?
Thanks!
Because you say interaction is fast once the page is loaded, I guess your biggest bottleneck is transferring the data over the wire.
I would send everything as JSON (compressed with gzip) which is very lightweight and fast to load.
I think styling should be done with CSS, not JS. Also, if you want the best UX, initialize your table with fewer elements (1-200), and then deal with the rest. It is better for the user if you show something right away.
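Chunked initialization can be sketched like this (renderInChunks and renderBatch are illustrative names, not part of any table plugin):

```javascript
// Render the first batch of rows immediately, then append the rest in
// batches, yielding to the browser between each so the page stays responsive.
function renderInChunks(rows, renderBatch, chunkSize) {
  var i = 0;
  function next() {
    renderBatch(rows.slice(i, i + chunkSize));
    i += chunkSize;
    if (i < rows.length) setTimeout(next, 0); // let the browser paint between batches
  }
  next();
}
```

With 3000 rows and a chunk size of 200, the user sees a usable table after the first batch instead of staring at a blank page for 15 seconds.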
Storing the array can't be a problem because GC will clear it up.
