Why is a custom function slower than a builtin? - javascript

I'm experimenting with the performance of JavaScript's push and pop functions.
I have an array called arr.
When I run this:
for (var i = 0; i < 100; i++) {
    for (var k = 0; k < 100000; k++) {
        arr.push(Math.ceil(Math.random() * 100));
        arr.pop();
    }
}
I get a time of 251.38515999977244 milliseconds (I'm using the performance.now() function).
But when I run a custom push and pop:
Array.prototype.pushy = function(value) {
    this[this.length] = value;
}
Array.prototype.poppy = function() {
    this.splice(-1, 1);
}
for (var i = 0; i < 100; i++) {
    for (var k = 0; k < 100000; k++) {
        arr.pushy(Math.ceil(Math.random() * 100));
        arr.poppy();
    }
}
The time is 1896.055750000014 milliseconds.
Can anyone explain why there's such a huge difference between these?
To those who worry about timing noise: I ran this test 100 times and computed an average time. I repeated that process 5 times to ensure there weren't any outlying results.
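For reference, here's roughly the harness I'm timing with (the timeIt wrapper name is just for illustration; the numbers above come from measurements like this):

var arr = [];

// Minimal timing harness using performance.now()
function timeIt(label, fn) {
    var start = performance.now();
    fn();
    console.log(label + ': ' + (performance.now() - start) + ' ms');
}

timeIt('built-in push/pop', function() {
    for (var i = 0; i < 100; i++) {
        for (var k = 0; k < 100000; k++) {
            arr.push(Math.ceil(Math.random() * 100));
            arr.pop();
        }
    }
});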

Because the built-in function is written in whatever language the browser was written in (probably C++) and is compiled. The custom function is written in JavaScript and is interpreted.
Generally, interpreted languages are much slower than compiled ones. One usually doesn't notice this with JavaScript because, for the most part, you only execute a couple of lines of JS between human interactions (which are always the slowest part).
Running JS in a tight loop, as you're doing here, highlights the difference.

The reason is that the built-in function was specifically designed and optimized to perform a specific task. The browser takes whatever shortcuts it can with the built-in function that it may not be able to recognize in the custom function. For example, with your implementation, the function needs to look up the array's length every single time it is called.
Array.prototype.pushy = function(value) {
    this[this.length] = value;
}
However, by simply using Array.prototype.push, the browser knows that the purpose is to append a value to the array. While browsers may implement the function differently, I highly doubt any of them needs to compute the length of the array on every single iteration.
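As a rough illustration of how much the operations chosen inside a custom function matter (this is a sketch, not how any engine implements pop): the question's poppy goes through splice, a general-purpose method that builds and returns an array of the removed elements; a hypothetical variant that truncates length directly avoids that work.

// Sketch: a custom pop that truncates the array instead of calling splice(-1, 1).
// Assigning a smaller length drops the trailing elements.
Array.prototype.poppier = function() {
    if (this.length > 0) {
        var last = this[this.length - 1];
        this.length = this.length - 1;
        return last;
    }
};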

Related

Javascript print output one at a time instead of entire page at once?

Just playing around a bit, but I noticed it's taking way too long for the page to load. Is there any way to get it to print out one line at a time, instead of having to wait until the entire page is loaded?
function limits() {
    var a = 1; // start at 1 so doubling produces powers of two
    for (var i = 0; i < 1000; i++) {
        for (var ii = 0; ii < 1000; ii++) {
            document.getElementById('foo').innerHTML += "<p>" + a + "</p>";
            a *= 2;
        }
    }
}
Now, how could I control this better so that, regardless of how long the whole thing takes, each line prints as soon as it's ready? Even slowing it down would be fine.
The JavaScript method window.requestAnimationFrame(callback) will call your callback function on the next animation frame. It's commonly used for animation and will probably work well for what you're doing.
To modify your code to use requestAnimationFrame, you have to make your function print a small chunk on its own, with a reference to know what chunk to print. If you stored your page contents in an array, for example, that could just be a starting index and a length. Since you are printing the increasing powers of 2, you can just pass in the last power of two and the number of lines you want to print for each run of the function.
You'll also need an exit condition -- a check within limits that if true, returns without requesting the next frame. I simply put a hard cap on the value of a, but you could also check that the index is less than array length (for my array of page contents idea above).
Because requestAnimationFrame takes a function reference as its callback, you can't pass your own arguments into it directly. Therefore, you have to use bind to attach the values to the function; then, within the function, you can access them using this. config is just an object holding the initial arguments you want the function to have, and binding it lets you access them within the function as this.numLines and this.a.
Then, when you request the next frame, you have to bind the values to limits again. If you are alright with keeping the arguments the same, you can just do limits.bind(this). But if you want to change them, you can create another object in a similar way to how I wrote config and bind that instead.
The following code seems to be a basic example of roughly what you're looking for:
var foo = document.getElementById('foo');
var maxA = 1000000000000000000000000000000000;

function limits() {
    for (var i = 0; i < this.numLines; ++i) {
        foo.innerHTML += "<p>" + this.a + "</p>";
        this.a *= 2;
        if (this.a > maxA) {
            return;
        }
    }
    requestAnimationFrame(limits.bind(this));
}

var config = {
    numLines: 3,
    a: 1
};

requestAnimationFrame(limits.bind(config));
It's implemented in JSFiddle here. I've also implemented a version where each line is put at the top of the page (as opposed to appended to the bottom) so you can see it happening better (you can find that one here).
You can do something like this:
function limits() {
    var a = 1;
    for (var i = 0; i < 1000; i++) {
        for (var ii = 0; ii < 1000; ii++) {
            setTimeout(function() {
                document.getElementById('foo').innerHTML += "<p>" + a + "</p>";
                a *= 2;
            }, 0);
        }
    }
}
You can adjust the time in the setTimeout, but even leaving it at zero will allow a more interactive experience while the page builds. Setting it to 10 or 100 will of course slow it down considerably, if you like.
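One caveat: the loop above queues a million timeouts up front, which can itself bog the page down. A common variation (sketched here with a made-up printChunked helper and an arbitrary chunk size) is to print a chunk per timeout and reschedule until done:

function printChunked(a, remaining, chunkSize) {
    var foo = document.getElementById('foo');
    for (var i = 0; i < chunkSize && remaining > 0; i++, remaining--) {
        foo.innerHTML += "<p>" + a + "</p>";
        a *= 2;
    }
    if (remaining > 0) {
        // Yield to the browser, then schedule the next chunk.
        setTimeout(function() { printChunked(a, remaining, chunkSize); }, 0);
    }
}
printChunked(1, 1000000, 50); // hypothetical chunk size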

Am I creating a ton of DOM objects with this jQuery code?

I'm making a hex dumper in JavaScript that will analyze the byte data of a file provided by the user. In order to properly display the data preview of the file, I'm escaping html characters using the methods from the top rated answer of this question.
function htmlEncode(value) { return $("<div/>").text(value).html(); }
function htmlDecode(value) { return $("<div/>").html(value).text(); }
I'm not asking for suggestions of how to best encode and decode html characters. What I am curious about is whether or not calling these functions hundreds of thousands of times in rapid succession is creating a metric butt-ton of DOM elements that will slow down the utility over time.
I've noticed that running my dumper on a small file (35 bytes), which thankfully runs almost instantaneously, takes much longer after I've run my dumper on a larger file (132,832 bytes) in the same session. The encode function is essentially run once for each byte.
I know JavaScript has a garbage collector, and these elements aren't tied to anything so I would assume they would get cleaned up after they're done being used, but I don't know the details or inner workings of the collector so I don't want to make any assumptions as to how quickly it will take care of the problem.
Theoretically, it's possible that you're using a lot of memory, because you are creating numerous new elements. However, since they are never added to the DOM, they should be cleaned up on the next garbage-collection cycle, or possibly sooner as the stack is popped (it depends on how optimized the engine is).
But, as #juvian pointed out, you can get around this by having one dedicated element that you use for this operation. Not only will it ensure you aren't filling up your memory but it will also be faster since jQuery won't have to repeatedly process the <div/> string, create an element, generate a jQuery object, etc.
Here's my not-completely-scientifically-sound-but-definitely-good-enough-to-get-the-idea proof:
function now() {
    if (typeof performance !== 'undefined') {
        now = performance.now.bind(performance);
        return performance.now();
    } else {
        now = Date.now.bind(Date);
        return Date.now();
    }
}

// Load the best available means of measuring the current time
now();

// Generate a whole bunch of characters
var data = [];
var totalNumberOfCharacters = 132832;
for (var i = 0; i < totalNumberOfCharacters; i++) {
    data.push(String.fromCharCode((i % 26) + 65));
}

// Basic encode function
function htmlEncode(value) {
    return $("<div/>").text(value).html();
}

// Cache a single <div> to improve performance
var $div = $('<div/>');
function cachedHtmlEncode(value) {
    return $div.text(value).html();
}

// Encode using the unoptimized approach
var start = now();
var unoptimized = '';
for (var i = 0; i < totalNumberOfCharacters; i++) {
    unoptimized += htmlEncode(data[i]);
}
var end = now();
console.log('unoptimized', end - start);
document.querySelector('pre').innerText = unoptimized;

// Encode using the optimized approach
start = now();
var optimized = '';
for (var i = 0; i < totalNumberOfCharacters; i++) {
    optimized += cachedHtmlEncode(data[i]);
}
end = now();
console.log('optimized', end - start);
document.querySelector('pre').innerText = optimized;
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<pre></pre>

Is it possible to have quadratic time complexity without nested loops?

It was going so well. I thought I had my head around time complexity. I was having a play on codility and used the following algorithm to solve one of their problems. I am aware there are better solutions to this problem (permutation check) - but I simply don't understand how something without nested loops could have a time complexity of O(N^2). I was under the impression that the associative arrays in Javascript are like hashes and are very quick, and wouldn't be implemented as time-consuming loops.
Here is the example code
function solution(A) {
    // write your code in JavaScript (Node.js)
    var dict = {};
    for (var i = 1; i < A.length + 1; i++) {
        dict[i] = 1;
    }
    for (var j = 0; j < A.length; j++) {
        delete dict[A[j]];
    }
    var keyslength = Object.keys(dict).length;
    return keyslength === 0 ? 1 : 0;
}
and here is the verdict: Codility detected a time complexity of O(N^2).
There must be a bug in their tool that you should report: this code has a complexity of O(n).
Believe me I am someone on the Internet.
On my machine:
console.time(1000);
solution(new Array(1000));
console.timeEnd(1000);
// about 0.4ms

console.time(10000);
solution(new Array(10000));
console.timeEnd(10000);
// about 4ms
Update: To be pedantic (sic), I still need a third data point to show it's linear
console.time(100000);
solution(new Array(100000));
console.timeEnd(100000);
// about 45ms, well let's say 40ms, that is not a proof anyway
Is it possible to have quadratic time complexity without nested loops? Yes. Consider this:
function getTheLengthOfAListSquared(list) {
    for (var i = 0; i < list.length * list.length; i++) { }
    return i;
}
As for that particular code sample, it does seem to be O(n) as #floribon says, given that Javascript object lookup should be constant time.
Remember that making an algorithm that takes an arbitrary function and determines whether that function will complete at all is provably impossible (halting problem), let alone determining complexity. Writing a tool to statically determine the complexity of anything but the most simple programs would be extremely difficult and this tool's result demonstrates that.
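Another way quadratic time sneaks in without visibly nested loops is a single loop whose body calls a linear-time operation; the inner loop is just hidden inside the library call. A small sketch:

// One visible loop, but each indexOf call scans up to n elements: O(n^2) overall.
function countDuplicates(list) {
    var duplicates = 0;
    for (var i = 0; i < list.length; i++) {
        if (list.indexOf(list[i]) !== i) {
            duplicates++;
        }
    }
    return duplicates;
}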

Efficient way to access members of Objects in JavaScript

My question is somewhat related to this one, but it involves some key differences.
So here it is, I have following code;
for (var i = 0; i < someObj.children[1].listItems.length; i++)
{
    doSomething(someObj.children[1].listItems[i]);
    console.log(someObj.children[1].listItems[i]);
}
vs.
var i = 0,
    itemLength = someObj.children[1].listItems.length,
    item;
for (; i < itemLength; i++)
{
    item = someObj.children[1].listItems[i];
    doSomething(item);
    console.log(item);
}
This is a small representative sample of the code I deal with in an enterprise web app built with ExtJS. In the code above, the second example is clearly cleaner and more readable than the first.
But is there any performance gain when I reduce the number of object lookups this way?
I'm asking for a scenario where there is a lot more code within the loop accessing members deep within the object, the iteration happens ~1000 times, and the browsers range from IE8 to the latest Chrome.
There won't be a noticeable difference, but for performance and readability, and because it looks like a live NodeList, it should probably be iterated in reverse if you're going to change it:
var elems = someObj.children[1].listItems;
for (var i = elems.length; i--;) {
    doSomething(elems[i]);
    console.log(elems[i]);
}
Performance gain will depend on how large the list is.
Caching the length is typically better (your second case), because someObj.children[1].listItems.length is not evaluated every time through the loop, as it is in your first case.
If order doesn't matter, I like to loop like this:
var i;
for (i = array.length; --i >= 0;) {
    // do stuff
}
Caching object property lookup will result in a performance gain, but the extent of it is based on iterations and depth of the lookups. When your JS engine evaluates something like object.a.b.c.d, there is more work involved than just evaluating d. You can make your second case more efficient by caching additional property lookups outside the loop:
var i = 0,
    items = someObj.children[1].listItems,
    itemLength = items.length,
    item;
for (; i < itemLength; i++) {
    item = items[i];
    doSomething(item);
    console.log(item);
}
The best way to tell, of course, is a jsperf.
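If a jsperf isn't handy, a rough in-page comparison along these lines gives a first impression (bench is a made-up helper; doSomething and someObj are the placeholders from the question, and console.time numbers are only indicative):

function bench(label, fn) {
    console.time(label);
    for (var run = 0; run < 1000; run++) {
        fn();
    }
    console.timeEnd(label);
}

bench('uncached', function() {
    for (var i = 0; i < someObj.children[1].listItems.length; i++) {
        doSomething(someObj.children[1].listItems[i]);
    }
});

bench('cached', function() {
    var items = someObj.children[1].listItems,
        len = items.length;
    for (var i = 0; i < len; i++) {
        doSomething(items[i]);
    }
});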

is coffeescript faster than javascript?

JavaScript is everywhere and, to my mind, is constantly gaining importance. Most programmers would agree that while JavaScript itself is ugly, its "territory" sure is impressive. With the capabilities of HTML5 and the speed of modern browsers, deploying an application via JavaScript is an interesting option: it's probably as cross-platform as you can get.
The natural result is cross-compilers. The predominant one is probably GWT, but there are several other options out there. My favourite is CoffeeScript, since it adds only a thin layer over JavaScript and is much more "lightweight" than, for example, GWT.
There's just one thing that has been bugging me: although my project is rather small, performance has always been an important topic. Here's a quote:
The GWT SDK provides a set of core Java APIs and Widgets. These allow you to write AJAX applications in Java and then compile the source to highly optimized JavaScript.
Is CoffeeScript optimized, too? Since CoffeeScript seems to make heavy use of uncommon JavaScript functionality, I'm worried about how its performance compares.
Do you have experience with CoffeeScript-related speed issues?
Do you know a good benchmark comparison?
Apologies for resurrecting an old topic, but it was concerning me too. I decided to run a little test. One of the simplest performance tests I know is writing consecutive values to an array: memory is consumed in a familiar manner as the array grows, and 'for' loops are common enough in real life to be considered relevant.
After a couple of red herrings, I found CoffeeScript's simplest method to be:
newway = -> [0..1000000]
# simpler and quicker than the example from http://coffeescript.org/#loops
# countdown = (num for num in [10..1])
This uses a closure and returns the array as the result. My equivalent is this:
function oldway()
{
    var a = [];
    for (var i = 0; i <= 1000000; i++)
        a[i] = i;
    return a;
}
As you can see, the result is the same, and it grows the array in a similar way too. Next I profiled each in Chrome, 100 runs each, and averaged:
newway() | 78.5ms
oldway() | 49.9ms
CoffeeScript takes roughly 57% longer (78.5ms vs 49.9ms). I refute the claim that "the CoffeeScript you write ends up running as fast as (and often faster than) the JS you would have written" (Jeremy Ashkenas).
Addendum: I was also suspicious of the popular belief that "there is always a one-to-one equivalent in JS". I tried to recreate my own code with this:
badway = ->
  a = []
  for i in [1..1000000]
    a[i] = i
  return a
Despite the similarity, it still proved 7% slower, because the compiler adds extra checks for direction (increment or decrement), which means it is not a straight translation.
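For what it's worth, those direction checks are easiest to see when the range bounds aren't literals. A CoffeeScript compiler of that era turns a variable range such as

for i in [1..n]
  a[i] = i

into something roughly like this (a from-memory sketch; the exact temporary names vary by compiler version):

for (i = _i = 1, _ref = n; 1 <= _ref ? _i <= _ref : _i >= _ref; i = 1 <= _ref ? ++_i : --_i) {
  a[i] = i;
}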
This is all quite interesting, and there is one truth: CoffeeScript cannot run faster than fully optimized JavaScript.
That said, since CoffeeScript generates JavaScript, there are ways to make it worth it. Sadly, it doesn't seem to be the case yet.
Let's take the example:
new_way = -> [0..1000000]
new_way()
It compiles to this with CoffeeScript 1.6.2:
// Generated by CoffeeScript 1.6.2
(function() {
  var new_way;
  new_way = function() {
    var _i, _results;
    return (function() {
      _results = [];
      for (_i = 0; _i <= 1000000; _i++){ _results.push(_i); }
      return _results;
    }).apply(this);
  };
  new_way();
}).call(this);
And the code provided by clockworkgeek is
function oldway()
{
var a = [];
for (var i = 0; i <= 1000000; i++)
a[i] = i;
return a;
}
oldway()
But since the CoffeeScript hides the function inside a scope, we should do the same for the JavaScript. We don't want to pollute window, right?
(function() {
    function oldway()
    {
        var a = [];
        for (var i = 0; i <= 1000000; i++)
            a[i] = i;
        return a;
    }
    oldway()
}).call(this);
So here we have code that actually does the same thing. Now we'd like to test both versions a couple of times.
CoffeeScript:
for i in [0..100]
  new_way = -> [0..1000000]
  new_way()
Here's the generated JS, and you may ask yourself what is going on there: it's creating both i and _i for whatever reason, when it's clear that only one of the two is needed.
// Generated by CoffeeScript 1.6.2
(function() {
  var i, new_way, _i;
  for (i = _i = 0; _i <= 100; i = ++_i) {
    new_way = function() {
      var _j, _results;
      return (function() {
        _results = [];
        for (_j = 0; _j <= 1000000; _j++){ _results.push(_j); }
        return _results;
      }).apply(this);
    };
    new_way();
  }
}).call(this);
So now we're going to update our JavaScript to match:
(function() {
    function oldway()
    {
        var a = [];
        for (var i = 0; i <= 1000000; i++)
            a[i] = i;
        return a;
    }
    var _i;
    for (_i = 0; _i <= 100; ++_i) {
        oldway()
    }
}).call(this);
So the results:

time coffee test.coffee
real 0m5.647s
user 0m0.016s
sys 0m0.076s

time node test.js
real 0m5.479s
user 0m0.000s
sys 0m0.000s

The plain JS takes:

time node test2.js
real 0m5.904s
user 0m0.000s
sys 0m0.000s
So you might ask yourself... what the hell, the CoffeeScript is faster??? Then you look at the code and say: let's try to fix that!
(function() {
    function oldway()
    {
        var a = [];
        for (var i = 0; i <= 1000000; i++)
            a.push(i);
        return a;
    }
    var _i;
    for (_i = 0; _i <= 100; ++_i) {
        oldway()
    }
}).call(this);
We made one small fix to the JS script, changing a[i] = i to a.push(i). Then let's try again... and BOOM:
time node test2.js
real 0m5.330s
user 0m0.000s
sys 0m0.000s
This small change made it faster than our CoffeeScript. Now let's take the generated CoffeeScript and remove those duplicate loop variables, changing it to this:
// Generated by CoffeeScript 1.6.2
(function() {
  var i, new_way;
  for (i = 0; i <= 100; ++i) {
    new_way = function() {
      var _j, _results;
      return (function() {
        _results = [];
        for (_j = 0; _j <= 1000000; _j++){ _results.push(_j); }
        return _results;
      }).apply(this);
    };
    new_way();
  }
}).call(this);
and BOOM
time node test.js
real 0m5.373s
user 0m0.000s
sys 0m0.000s
Well, what I'm trying to say is that there are great benefits to using a higher-level language. The generated CoffeeScript wasn't optimized, but it wasn't far from the pure JS code. The code optimization that clockworkgeek tried, using the index directly instead of push, actually seemed to backfire and ran more slowly than the generated CoffeeScript.
The truth is that this kind of optimization can be hard to find and fix. On the other hand, from version to version, CoffeeScript could generate JavaScript optimized for current browsers and interpreters. The CoffeeScript source would remain unchanged but could be regenerated to speed things up.
If you write directly in JavaScript, there is no way to really optimize the code as much as one could with a real compiler.
The other interesting part is that one day, CoffeeScript or other JavaScript generators could be used to analyze code (like jslint does) and remove parts where some variables aren't needed, or compile functions differently with different arguments to speed things up. With pure JS, you have to expect that a JIT compiler will do the job right, and that's good for CoffeeScript too.
For example, I could optimize the CoffeeScript one last time by moving the new_way = (function... out of the for loop. A smart programmer would know that the only thing happening there is reassigning the same function on each iteration, which doesn't change it. The function is created in the enclosing scope and isn't recreated on each loop. That said, it shouldn't change much...
time node test.js
real 0m5.363s
user 0m0.015s
sys 0m0.000s
So this is pretty much it.
Short answer: No.
CoffeeScript generates JavaScript, so its maximum possible speed equals the speed of JavaScript. But while you can optimize JS code at a low level (yes, that sounds ironic) and gain some performance boost, with CoffeeScript you cannot do that.
But the speed of the code should not be your concern when choosing CS over JS, as the difference is negligible for most tasks.
CoffeeScript compiles directly to JavaScript, meaning there is always a one-to-one equivalent in JS for any CoffeeScript source. There is nothing uncommon about it. A performance gain can come from optimized output, e.g. the fact that CoffeeScript stores the array length in a separate variable in a for loop instead of requesting it on every iteration. But that should be common practice in JavaScript too; it is just not enforced by the language itself.
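To illustrate that point, here is roughly what CoffeeScript 1.x emits for a plain for...in loop (a sketch; temporaries like _i and _len are named differently in later compiler versions). The length is cached once, before the loop:

# CoffeeScript
doSomething(x) for x in list

// Compiled output, roughly:
var x, _i, _len;
for (_i = 0, _len = list.length; _i < _len; _i++) {
  x = list[_i];
  doSomething(x);
}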
I want to add something to the answer of Loïc Faure-Lacroix...
It seems that you only printed the times for one browser. And by the way, "x.push(i)" is not faster than "x[i] = i" according to jsperf: https://jsperf.com/array-direct-assignment-vs-push/130
Chrome: push => 79,491 ops/s; direct assignment => 3,815,588 ops/s;
IE Edge: push => 358,036 ops/s; direct assignment => 7,047,523 ops/s;
Firefox: push => 67,123 ops/s; direct assignment => 206,444 ops/s;
Another point: x.call(this) and x.apply(this)... I don't see any performance reason for those. jsperf confirms it: http://jsperf.com/call-apply-segu/18
Chrome:
direct call => 47,579,486 ops/s; x.call => 45,239,029 ops/s; x.apply => 15,036,387 ops/s;
IE Edge:
direct call => 113,210,261 ops/s; x.call => 17,771,762 ops/s; x.apply => 6,550,769 ops/s;
Firefox:
direct call => 780,255,612 ops/s; x.call => 76,210,019 ops/s; x.apply => 2,559,295 ops/s;
First, I used the current browsers.
Second, I extended the test with a for loop, because with one call the test is too short...
Last but not least, the tests for all browsers now look like the following.
Here I used CoffeeScript 1.10.0 (compiled with the same code given in his answer):
console.time('coffee'); // added manually
(function() {
  var new_way;
  new_way = function() {
    var i, results;
    return (function() {
      results = [];
      for (i = 0; i <= 1000000; i++){ results.push(i); }
      return results;
    }).apply(this);
  };
  // manually added on both
  var i;
  for (i = 0; i != 10; i++)
  {
    new_way();
  }
}).call(this);
console.timeEnd('coffee'); // added manually
Now the JavaScript:
console.time('js');
(function() {
  function old_way()
  {
    var i = 0, results = [];
    return (function()
    {
      for (i = 0; i <= 1000000; i++)
      {
        results[i] = i;
      }
      return results;
    })(); // replaced apply
  }
  var i;
  for (i = 0; i != 10; i++)
  {
    old_way();
  }
})(); // replaced call
console.timeEnd('js');
The limit value of the outer for loop is low because anything higher would make for pretty slow testing (10 * 1,000,000 iterations)...
Results
Chrome: coffee: 305.000ms; js: 258.000ms;
IE Edge: coffee: 5944.281ms; js: 3517.72ms;
Firefox: coffee: 174.23ms; js: 159.55ms;
Here I have to mention that coffee was not always the slowest in this test. You can see that by testing the code in Firefox.
My final answer:
First, I am not really familiar with CoffeeScript, but I looked into it because I am using the Atom editor and wanted to try building my first package there, and ended up going back to JavaScript...
So if there is anything wrong, you can correct me.
With CoffeeScript you can write less code, but when it comes to optimization, the code gets heavy. My own opinion: I don't see any of the claimed "productiveness" in this CoffeeScript language...
To get back to performance: the most used browser is Chrome (src: w3schools.com/browsers/browsers_stats.asp) at about 60%, and my tests have also shown that manually typed JavaScript runs a bit faster than CoffeeScript (in IE, much faster). I would recommend CoffeeScript for smaller projects, but if no one minds, stay with the language you like.
