What is the relative processing speed of JSONObject.item vs JSONObject['item']?

I'm working on speeding up some old JavaScript code that uses a lot of the following structure:
var obj = {
    attr1: value1,
    attr2: value2,
    ...
    attrN: valueN
};
someFunction(obj['attr1']);
JSHint gives me the following advice:
['attr1'] is better written in dot notation.
So it prefers obj.attr1 over obj['attr1']. I understand the aesthetic reason for this warning (explained here), but which notation is faster? I would think that the former would be more efficient because the latter involves the conversion of a string literal, but I have nothing other than speculation to back that up.
Any help you can offer would be much appreciated.

They're almost even. See these two jsperf examples:
http://jsperf.com/dot-operator-vs-array-notation
http://jsperf.com/dot-notation-vs-bracket-notation/2
Both of them show the two notations within about 1% of each other; however, both also show bracket notation as ever so slightly faster.
EDIT:
Browsing through newly created jsperfs, I found these two:
http://jsperf.com/mpaaa
http://jsperf.com/property-dot-versus-string
They both show almost the same result, and after running them multiple times I got different outcomes (sometimes dot notation was faster, sometimes bracket notation).
It's a tie
ANOTHER EDIT:
The Browserscope results are misleading: at least for me, it shows some very uneven results in other browsers, but when I tested in one of the browsers where it showed a huge difference, I found results similar to what I had already measured.
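For anyone who wants to reproduce this locally rather than on jsPerf, here is a minimal sketch of the comparison (the object shape and iteration count are my own arbitrary choices, not taken from the linked tests):

var obj = { attr1: 1, attr2: 2 }, sink = 0;
console.time('dot');
for (var i = 0; i < 1e7; i++) { sink += obj.attr1; }
console.timeEnd('dot');
console.time('bracket');
for (var j = 0; j < 1e7; j++) { sink += obj['attr1']; }
console.timeEnd('bracket');

The sink variable keeps a clever engine from optimizing the property reads away entirely; even so, expect run-to-run noise larger than the difference you're measuring.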


JavaScript Iterative Variable Filtering

Is there a tool that allows us to search for a JavaScript variable just like a memory editor does: through iterative filtering, either by exact value or by change?
(Sorry for the long intro, but it's the best way I found to describe my use case.)
When I was about 14 years old, I would use a memory editor to find, monitor, and edit variables in games.
This allowed me to understand a bit better how computers work, but it also let me have fun, changing the games' variables to whatever I liked (offline, of course ;) )
The program would show me all variables.
I would then reduce the list of variables by repeatedly filtering: either by searching for its exact value (if it was known) or by change (increase, decrease).
Now I find myself wanting to do the same for JavaScript. I've searched for a while and tried different things, including looking for the variables in the window object in the console (please keep in mind I'm not a JavaScript developer) and using the debug function (which works great if you know where the variable is), but I haven't found a similar solution.
Is it true that no tool exists like this?
There are so many use cases:
debugging: finding where that number with that weird value is;
fun: edit variables just for plain fun, in games, etc;
learning how to code: I learned how to program by "hacking" around, and I know I'm not the only one ;)
probably many others I can't think of.
Does anyone know anything like this?
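As a concrete illustration of the kind of first "scan" described above (my sketch, not an existing tool; findByValue is a hypothetical helper), you can walk the object graph from window in the DevTools console looking for an exact value:

function findByValue(root, target, path, seen, results, depth) {
  path = path || 'window'; seen = seen || new WeakSet();
  results = results || []; depth = depth || 0;
  if (depth > 4 || root === null || typeof root !== 'object' || seen.has(root)) return results;
  seen.add(root); // avoid revisiting objects (the graph is full of cycles)
  for (var key of Object.keys(root)) {
    var value;
    try { value = root[key]; } catch (e) { continue; } // some host objects throw on access
    if (value === target) results.push(path + '.' + key);
    findByValue(value, target, path + '.' + key, seen, results, depth + 1);
  }
  return results;
}
// Example: findByValue(window, 1337) might return ['window.game.score']

Running the search again after the value changes and intersecting the returned paths gives you the iterative filtering you describe.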

Why do some arrays appear with letters next to the value in chrome inspector?

I am using chrome browser inspector to examine some javascript arrays. I notice that certain arrays (that are being generated by a library that I am using - three.js via a-frame) are represented with lowercase letters next to the value.
Please see below an example; you can see the lowercase cs that accompany each value.
And here is one that I have generated myself for comparison; rather than these cs there is a generic object representation, which is what I am accustomed to.
Why do those letters appear? What is the difference between these arrays?
I ask because (1) I am curious and (2) I am seeing different behaviour between these two types of array, and I am wondering whether this indicates a different format that I am not aware of.
If there is a known difference, is it possible to convert one to the other quickly, so that I can eliminate it as a cause of any bugs?
Thanks as ever for any advice. If you need more information to answer, please let me know.
Use JSON.parse(JSON.stringify(array)) on both arrays to convert everything to plain arrays of plain objects.
The only thing to watch out for is that the objects must not contain circular references, but from what I can see yours don't.
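For context (my illustration, not part of the original answer): the letters are the constructor names that DevTools prints for class instances; three.js fills its arrays with instances of (minified) classes, hence the c. A JSON round-trip strips the prototypes:

class c { constructor(x) { this.x = x; } }     // stands in for a minified library class
var fancy = [new c(1), new c(2)];              // inspector shows: [c, c]
var plain = JSON.parse(JSON.stringify(fancy)); // inspector shows: [{…}, {…}]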

What is the complexity of JSON.parse() in JavaScript?

The title says it all. I'm going to be parsing a very large JSON string and was curious what the complexity of this built-in method was.
I would hope that it's Θ(n), where n is the number of characters in the string, since the parser has to examine every character to determine whether there is a syntax error or not.
I tried searching but couldn't come up with anything.
JSON is a very simple grammar that does not even require lookahead. As long as GC is not involved, it is purely O(n).
I do not know of the implementations in browsers, but your assumption is correct to a certain point. If the JSON includes mainly strings, it will be straight forward and very linear. If you have many floating points, it will take a bit of time to convert the numbers, but again quite linear (numbers with more digits take slightly longer, but in comparison to a long string... very similar).
Since in most cases arrays and objects end up as maps, the memory allocation grows as required and will generally be linear. Many (if not most) implementations run on a garbage-collected runtime, which makes it practically impossible to know for sure how much time a given transformation will take: it very much depends on things such as the size of the memory model on the target computer and how often the garbage collector runs. However, memory should generally just grow as items are added to the map, so it will mostly look linear as well. I would not expect an implementation to rely on a realloc()-style strategy, which would mean copying data and thus getting slower and slower as an array/object grows bigger.
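If you want to sanity-check the linearity claim empirically, here is a rough sketch (the sizes are arbitrary, and as noted above, GC will add noise to the timings):

for (var size of [1e4, 1e5, 1e6]) {
  var json = JSON.stringify(Array.from({ length: size }, function (_, i) { return i; }));
  console.time('parse ' + size);
  JSON.parse(json);
  console.timeEnd('parse ' + size);
}

If parsing is roughly linear, each tenfold increase in input size should cost roughly ten times as much.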
I was curious, so to add a little more info: I believe this is the "high-level" implementation of JSON.parse. I tried to find whether Chromium has its own source for it, and I'm not sure if this is it; this is going off the source from GitHub.
Things to note:
The worst case is probably the scenario of handling objects, which requires O(N) time, where N is the number of characters.
In the case a reviver function is passed, it has to re-walk the entire object after it's created, but that only happens once, so it's fairly negligible. It also depends on what the reviver function is doing, and you will have to account for its own time complexity.
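To make the reviver point concrete, here is a small illustration (my example, not from the linked source): the reviver is invoked once per parsed value, so it adds one extra walk over the result on top of the O(N) scan:

var text = '{"a": 1, "b": {"c": 2}}';
var doubled = JSON.parse(text, function (key, value) {
  return typeof value === 'number' ? value * 2 : value; // called once per value
});
// doubled -> { a: 2, b: { c: 4 } }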

Why is the third option better than regex?

I think regex is pretty fast and the third option is confusing. What do you think?
http://jqfundamentals.com/book/ch09s12.html
// old way
if (type == 'foo' || type == 'bar') { ... }
// better
if (/^(foo|bar)$/.test(type)) { ... }
// object literal lookup
if (({ foo : 1, bar : 1 })[type]) { ... }
I'll humbly disagree with Rebecca Murphey and vote for simplicity, for the first option.
I think regex is pretty fast
Machine code is even faster, but we don't use it.
the third option is confusing
It's only confusing if you're unfamiliar with the trick. (And for people not used to seeing a regex used to compare two strings, the second option will be even more confusing.)
I just made a rudimentary benchmark and I'm honestly not sure how she got those results...
http://jsbin.com/uzuxi4/2/edit
Regex seems to scale the best, but the first is by far the fastest in all modern browsers. The last is excruciatingly slow. I understand the complexity theory behind the three, but in practice it doesn't seem that she's correct.
Not only does the first have the best readability, it also seems to be the fastest. I even nested loops to take advantage of any browser caching of literal tables or constants (to no avail).
Edit:
It appears that when an object is explicitly created, she is indeed correct, however: http://jsbin.com/uzuxi4/4/edit
function __hash() {
    ...
    var type = 'bar';
    var testobj = { foo: 1, bar: 1 };
    var c = 0;
    for (var i = 0; i < 1000; i++) {
        if (testobj[type]) {
            for (var j = 0; j < 10000; j++) {
                if (testobj[type]) { c++; }
            }
        }
    }
    ...
}
We see that once the object has an internal reference, the seek time drops to about 500 ms, which is probably the plateau. Object key lookup may be best for larger data sets, but in practice I don't really see it as a viable option for everyday use.
The first option involves potentially two string compares.
The second option involves a parse each time.
The third option does a simple hash of the string and then a hash-table lookup, which is the most efficient in this case in terms of the amount of work that needs to be done.
The third option also scales better than the other two as more alternative strings are added, because the first two are O(n) and the third is O(1) in the average case.
If we want to talk about which option is prettier / more maintainable, that's a whole separate conversation.
The first case should really be done with === to avoid any type coercions. Depending on the number of alternatives you need to check, it can become O(N); however, depending on your code, most JS engines will be able to do a simple pointer check for the comparison.
In the second case you use a RegExp, and while RegExps are very fast, they tend to be slower for simple equality decisions than more direct equality comparisons. Simple string comparisons like yours are likely to be a pointer compare in a modern JS engine, but if you use a regexp the regexp must read every character.
The third case is more tricky -- if you do have a lot of values to check it may be faster, especially if you cache the object rather than repeatedly recreating it as it will simply be a hash lookup -- the exact performance of the lookup depends on the engine though.
I suspect a switch statement would beat the object literal case though.
Out of curiosity, I made a test (which you can see here); the fastest approach (in a WebKit nightly at least) seems to be a switch statement, followed by if, followed by the object, with regexps last.
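For reference, the switch version of the same check would look something like this (my sketch; I'm not reproducing the test's exact code):

switch (type) {
  case 'foo':
  case 'bar':
    // matched
    break;
  default:
    // not matched
}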
Just wanted to weigh in here and remind everyone that this is an open-source book with contributions from many people! The section being discussed, indeed, is based on content provided by a community member. If you have suggestions for improving the section, by all means, please open an issue on the repository, or better, fork the repo and send me a pull request :)
That said, I have just set up a jsPerf test (http://jsperf.com/string-tests), and at least in Chrome, the results are the opposite of what the book says. I've opened an issue on the book, and will try to deal with this in the near future.
Finally, two things:
I want to echo what another commenter said: perf optimizations are fun to talk about, and while there are some that really do matter, many don't. It's important to keep perspective on how much -- or little -- of a difference stuff like this makes.
I also want to echo the commenter who said, essentially, that readability is in the eyes of the beholder. Something confusing to one person may be perfectly clear to another. I do believe we should strive for readability, but I think there's a happy medium. Reading code that was a bit perplexing to me at first opened my eyes to a lot of great techniques; I'd have hated if it had been written so the complete newb that I was at the time could understand it.
The object literal lookup is optimized with a hash lookup which only requires one logical check instead of n. In a longer list you also won't have to repeat "type == " a zillion times.
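A sketch of that cached variant (names are mine; the point is that the object is built once rather than on every test):

var VALID_TYPES = { foo: 1, bar: 1 }; // built once, reused
function isValidType(type) {
  return VALID_TYPES[type] === 1;     // one hash lookup instead of n compares
}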
For simplicity and readability the first will win every time. It might not be as fast, but who cares unless it is in a heavily run loop.
Good compilers should optimize things like this away.

String concatenation vs string buffers in Javascript

I was reading this book, Professional JavaScript for Web Developers, where the author mentions that string concatenation is an expensive operation compared to using an array to store strings and then using the join method to create the final string. Curious, I ran a couple of tests to see how much time it would save, and this is what I got:
http://jsbin.com/ivako
Somehow, Firefox usually produces similar times for both approaches, but in IE string concatenation is much, much faster. So, can this idea now be considered outdated (browsers have probably improved since)?
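For anyone who wants to rerun the comparison without the jsbin link, here is a minimal sketch of the two approaches (the chunk string and iteration count are my own arbitrary choices):

var N = 10000, s = '', parts = [];
console.time('concat');
for (var i = 0; i < N; i++) { s += 'abcde'; }
console.timeEnd('concat');
console.time('join');
for (var j = 0; j < N; j++) { parts.push('abcde'); }
var joined = parts.join('');
console.timeEnd('join');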
Even if it were true and join() were faster than concatenation, it wouldn't matter: we are talking about tiny amounts of milliseconds here, which are completely negligible.
I would always prefer well-structured and easy-to-read code over a microscopic performance boost, and I think that using concatenation looks better and is easier to read.
Just my two cents.
On my system (IE 8 on Windows 7), the times for StringBuilder in that test vary over roughly a 70-100% range -- that is, they are not stable -- although the mean is about 95% of that of normal appending.
While it's easy now to just say "premature optimization" (and I suspect that in almost every case it is), there are things worth considering:
The problem with repeated string concatenation comes from repeated memory allocations and repeated data copies (advanced string data types can reduce/eliminate much of this, but let's keep assuming a simplistic model for now). From this, let's raise some questions:
What memory allocation is used? In the naive case, each str += x requires str.length + x.length of new memory to be allocated. The standard C malloc, for instance, is a rather poor memory allocator. JS implementations have undergone changes over the years, including, among other things, better memory subsystems. Of course these changes don't stop there and touch practically all aspects of modern JS code. The fact that ancient implementations may have been incredibly slow at certain tasks does not necessarily imply that the same issues still exist, or exist to the same extent.
As above, the implementation of Array.join is very important. If it does NOT pre-allocate memory for the final string before building it, then it only saves on data-copy costs -- and how many GB/s is main memory these days? 10,000 x 50 is hardly pushing a limit. A smart Array.join operation with a POOR MEMORY ALLOCATOR would be expected to perform a good bit better, simply because the amount of re-allocation is reduced. This difference would be expected to shrink as allocation costs decrease.
The micro-benchmark code may be flawed depending on whether the JS engine creates a new object per each UNIQUE string literal or not. (This would bias it towards the Array.join method, but it needs to be considered in general.)
The benchmark is indeed a micro benchmark :)
Increasing the growing size should have an impact on performance based on any or all (and then some) of the above conditions. It is generally easy to show extreme cases favoring one method or another -- the expected use case is generally of more importance.
Although, quite honestly, for any form of sane string building, I would just use normal string concatenation until such time as it was determined to be a bottleneck, if ever.
I would re-read the above statement from the book and see if there are perhaps other implicit considerations the author meant to invoke, such as "for very large strings" or "insane amounts of string operations" or "in JScript/IE6", etc. If not, then such a statement is about as useful as "insertion sort is O(n*n)" [the realized cost depends upon the state of the data and the size of n, of course].
And the disclaimer: the speed of the code depends upon the browser, operating system, the underlying hardware, moon gravitational forces and, of course, how your computer feels about you.
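To put a number on the naive-copy model sketched above (my arithmetic, under the same simplistic assumptions): appending k chunks of length m one at a time copies roughly m + 2m + 3m + ... + km = m*k*(k+1)/2 characters in total, i.e. O(k^2) work, whereas building an array and joining once copies each character a constant number of times, i.e. O(k*m). For k = 10,000 and m = 5 that is about 250 million copied characters versus 50,000.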
In principle the book is right. Joining an array should be much faster than repeatedly concatenating to the same string. As a simple algorithm on immutable strings it is demonstrably faster.
The trick is: JavaScript authors, being largely non-expert dabblers, have written a load of code out there in the wild that uses concatenation, and relatively little ‘good’ code that uses methods like array-join. The upshot is that browser authors can get a better improvement in speed on the average web page by catering for and optimising the ‘bad’, more common option of concatenation.
So that's what happened. The newer browser versions have some fairly hairy optimisation stuff that detects when you're doing a load of concatenations, and hacks it about so that internally it is working more like an array-join, at more or less the same speed.
I actually have some experience in this area, since my primary product is a big, IE-only webapp that does a LOT of string concatenation in order to build up XML docs to send to the server. For example, in the worst case a page might have 5-10 iframes, each with a few hundred text boxes that each have 5-10 expando properties.
For something like our save function, we iterate through every tab (iframe) and every entity on that tab, pull out all the expando properties on each entity and stuff them all into a giant XML document.
When profiling and improving our save method, we found that using string concatenation in IE7 was a lot slower than using the array-of-strings method. Some other points of interest: accessing DOM object expando properties is really slow, so we put them all into JavaScript arrays instead. Finally, generating the JavaScript arrays themselves is actually best done on the server; you then write them onto the page as a literal control to be executed when the page loads.
As we know, not all browsers are created equal. Because of this, performance in different areas is guaranteed to differ from browser to browser.
That aside, I noticed the same results as you did; however, after removing the unnecessary buffer class and just using an array directly with a 10000-character string, the results were even tighter/more consistent (in FF 3.0.12): http://jsbin.com/ehalu/
Unless you're doing a great deal of string concatenation, I would say that this type of optimization is a micro-optimization. Your time might be better spent limiting DOM reflows and queries (generally the use of document.getElementById/getElementsByTagName), implementing caching of AJAX results (where applicable), and exploiting event bubbling (there's a link somewhere, I just can't find it now).
Okay, regarding this, here is a related module:
http://www.openjsan.org/doc/s/sh/shogo4405/String/Buffer/0.0.1/lib/String/Buffer.html
This is an effective means of creating String buffers, by using
var buffer = new String.Buffer();
buffer.append("foo", "bar");
This is the fastest sort of implementation of string buffers I know of. First of all, if you are implementing string buffers, don't use push: it is a built-in method and it is slow; for one thing, push iterates over the entire arguments array rather than just adding one element.
It all really depends upon the implementation of the join method; some implementations of join are really slow and some are relatively fast.
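As an illustration of that advice (my sketch, not the module's actual source), a buffer that appends by indexed assignment instead of push might look like this:

function StringBuffer() {
  this._parts = [];
  this._n = 0;
}
StringBuffer.prototype.append = function () {
  // indexed assignment instead of push, per the advice above
  for (var i = 0; i < arguments.length; i++) {
    this._parts[this._n++] = arguments[i];
  }
  return this;
};
StringBuffer.prototype.toString = function () {
  return this._parts.join('');
};
// Usage: new StringBuffer().append('foo', 'bar').toString() -> 'foobar'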
