This is largely an 'am I doing it right / how can I do this better' kind of topic, with some concrete questions at the end. If you have other advice / remarks on the text below, even if I didn't specifically ask those questions, feel free to comment.
I have a MySQL table for users of my app that, along with a set of fixed columns, also has a text column containing a JSON config object. This is to store variable configuration data that cannot be stored in separate columns because it has different properties per user. There doesn't need to be any lookup / ordering / anything on the configuration data, so we decided this would be the best way to go.
When querying the database from my Node.JS app (running on Node 0.12.4), I assign the JSON text to an object and then use Object.defineProperty to create a getter property that parses the JSON string data when it is needed and adds it to the object.
The code looks like this:
user =
  uid: results[0].uid
  _c: results[0].user_config # JSON config data as string

Object.defineProperty user, 'config',
  get: ->
    @c = JSON.parse @_c if not @c?
    @c
Edit: the above code is CoffeeScript; here's the (approximate) JavaScript equivalent for those of you who don't use CoffeeScript:
var user = {
  uid: results[0].uid,
  _c: results[0].user_config // JSON config data as string
};

Object.defineProperty(user, 'config', {
  get: function() {
    if (this.c === undefined) {
      this.c = JSON.parse(this._c);
    }
    return this.c;
  }
});
I implemented it this way because parsing JSON blocks the Node event loop, and the config property is only needed about half the time (this is in a middleware function for an express server) so this way the JSON would only be parsed when it is actually needed. The config data itself can range from 5 to around 50 different properties organised in a couple of nested objects, not a huge amount of data but still more than just a few lines of JSON.
Additionally, there are three of these JSON objects (I only showed one since they're all basically the same, just with different data in them). Each one is needed in different scenarios but all of the scenarios depend on variables (some of which come from external sources) so at the point of this function it's impossible to know which ones will be necessary.
So I had a couple of questions about this approach that I hope you guys can answer.
Is there a negative performance impact when using Object.defineProperty, and if so, could it negate the benefit of not parsing the JSON data right away?
Am I correct in assuming that not parsing the JSON right away will actually improve performance? We're looking at a continuously high number of requests and we need to process these quickly and efficiently.
Right now the three JSON data sets come from two different tables JOINed in a single SQL query, so that each request needs only one query instead of up to four. Keeping in mind that there are scenarios where none of the JSON data is needed, but also scenarios where all three data sets are needed (and of course scenarios in between), could it be an improvement to fetch a JSON data set from its table only at the point when it is actually needed? I avoided this because I feel like waiting for four separate SELECT queries would take longer than waiting for one query over two JOINed tables.
Are there other ways to approach this that would improve the general performance even more? (I know, this one's a bit of a subjective question, but ideas / suggestions of things I should check out are welcome). I'm not looking to spin off parsing the JSON data into a separate thread though, because as our service runs on a cluster of virtualised single-core servers, creating a child process would only increase overall CPU usage, which at high loads would have even more negative impact on performance.
Note: when I say performance, I mainly mean fast and efficient throughput. We prefer a somewhat larger memory footprint over heavier CPU usage.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." - Donald Knuth
What do I take from that? Too much time is spent optimizing, with dubious results, instead of focusing on design and clarity.
It's true that JSON.parse blocks the event loop, but every synchronous call does - this is just code execution and is not a bad thing.
The root concern is not that it is blocking, but how long it is blocking. I remember a Strongloop instructor saying 10ms was a good rule of thumb for max execution time for a call in an app at cloud scale. >10ms is time to start optimizing - for apps at huge scale. Each app has to define that threshold.
So, how much execution time will your lazy init save? This article says it takes 1.5s to parse a 15MB json string - about 10,000 B/ms. 3 configs, 50 properties each, 30 bytes/k-v pair = 4500 bytes - about half a millisecond.
When the time came to optimize, I would look at having your lazy init do the MySQL call. A config is needed only 50% of the time, it won't block the event loop, and an external call to a db absolutely dwarfs a JSON.parse().
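To illustrate, here is a rough sketch of what that could look like (callback style for Node 0.12; the db.query call, table and column names are placeholders, not your actual schema):

function makeUser(row, db) {
  var user = { uid: row.uid };
  var cache = null;
  // Async accessor: hit the database only when a config is actually needed.
  user.getConfig = function(done) {
    if (cache) return done(null, cache);
    db.query('SELECT user_config FROM user_configs WHERE uid = ?', [user.uid],
      function(err, results) {
        if (err) return done(err);
        cache = JSON.parse(results[0].user_config); // parse once, then reuse
        done(null, cache);
      });
  };
  return user;
}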
All of this to say: What you are doing is not necessarily bad or wrong, but if the whole app is littered with these types of dubious optimizations, how does that impact feature addition and maintenance? The biggest problems I see revolve around time to market, not speed. Code complexity increases time to market.
Q1: Is there a negative performance impact when using Object.defineProperty...
Check out this site for a hint.
Q2: ...not parsing the JSON right away will actually improve performance...
IMHO: inconsequentially
Q3: Right now the three JSON data sets come from two different tables...
The majority of db query cost is usually the out-of-process call and the network data transport (unless you have a really bad schema or config). All data in one call is the right move.
Q4: Are there other ways to approach this that would improve the general performance
Impossible to tell. The place to start is with an observed behavior, then profiler tools to identify the culprit, then code optimization.
Take, for instance, a JS web engine that implements associative arrays (objects) using hash tables. As we know, hash tables have worst case O(n) because collisions are inevitable.
Suppose I develop a new data structure in JavaScript, such as a LinkedList with worst case O(1) insert/delete. Since I implement it with objects/arrays, it must then be true that my implementation is also at minimum worst case O(n).
I'm aware that this engine optimizes very well, and a good hash function gives O(1) on average. However, I just want to confirm my realization that this isn't all as straightforward as the textbook says. Or is it?
I suppose that at the root, all data structures are implemented with an array, since its access is always O(1); shouldn't all data structures then be built on arrays, without intermediary structures? Also, a dynamic array still has O(n) delete; can't that same problem trickle down, just like in my earlier example?
Is this where a low-level programming language has the benefit over a high-level one, in that with less abstraction in the way, the textbook complexity numbers can actually match?
Apologies if my ideas are all over the place.
Since I implement [my custom data structure] with objects/arrays, it must then be true that my implementation is also at minimum worst case O(n).
No. Your linked list does not use an object with n keys anywhere. You'll have a
const linkedListNode = {
  value: …,
  next: null,
};
but even if this was implemented using a HashTable with O(n) worst-case member access, in your case n=2. There are not arbitrarily many properties in your object, just two. That's how you get back to O(1).
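To make that concrete, here's a minimal sketch: every node is an object with exactly two properties, so inserting at the head costs the same no matter how long the list grows.

// Each node has a fixed number of keys, so property access stays O(1).
function prepend(head, value) {
  return { value: value, next: head }; // O(1): one allocation, two keys
}

let head = null;
for (let i = 0; i < 1000000; i++) {
  head = prepend(head, i); // per-insert cost does not grow with n
}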
Is this where a low-level programming language has the benefit over a high-level one, in that with less abstraction in the way, the textbook complexity numbers can actually match?
No. Even in a lower-level programming language, you can begin to question the underlying abstraction. You think that in C, an indexed array access in memory is constant time? No, since page faults and other caching shenanigans come into play.
This is why textbook complexity is always defined in terms of a machine model. As long as you define object property access in your JavaScript execution model as constant-time (and it's a very reasonable assumption to do that! It closely resembles the real world), your numbers do apply to JavaScript code as well. Sure, you can try unravelling abstractions and analyse your high-level algorithms in terms of the primitives of a lower level, but there's no point in doing that. It's precisely why we have these abstractions in the first place.
"associative arrays (objects) by using hash tables" -> Javascript objects are complicated and are much more than just "it's a hash map". I don't know the exact technical details but I think they change to hash map after a certain amount of values are stored in them, along with that they also store metadata which is used on other algorithms like Object.keys to automatically sort the keys after they've been pulled out of the hash map. Again I don't know the technical details but I do know that, it's not straight forward.
"as we know hash tables has worse case O(n) because collisions are inevitable" -> It depends on what hashing you're using, but more than that even if collisions are inevitable it's not correct to just claim "it's worse case O(n)" and leave it at that because the probability of it being O(n) logarithmically declines to 0, the chances of it finding a collision time after time again and again is extremely unlikely, so while it can perhaps find a collision that doesn't effectively describe the time complexity.
"it must be true that my implementation is also at minimum worse case O(n) as well" -> Not correct, you're speaking about two different things. If you build a linked list each node will be connected to the next node using a heap reference, which has nothing to do with javascript objects. Iterating through an entire linked list will be O(n) but that's because of having to iterate over every next node, not because of anything with objects or hashes.
"worse case O(1) for insert/delete" -> This is only true if you have the reference to the node where you want to insert/delete it, otherwise you'll have to search through it before insertion/deletion. But that's exactly the same in javascript.
"then shouldn't all data structure be built with array without intermediary structures" -> Most data structures I know (like a list, stack, queue) are implemented on top of a normal array. The ones that aren't (like a binary tree, dictionary/map or a linked list) are not implemented on an array because it wouldn't really make sense. For example the whole point of using object references with a linked list is so that you can directly insert/delete something, using an array under the hood would just defeat the entire point of using a linked list when you're specifically trying to take advance of the object references.
"also dynamic array still has delete O(n) cant that be the same problem which can trickle down just like my earlier example" -> Not necessarily because when you wrap things inside an object and use an internal array inside, you can add metadata, indexes, hashes, things stored outside of the array (private to the object) and all sorts of other things to speed up and keep track of things on that array. So the complexity of what's used internally doesn't just automatically spill over to using it in another object. But you do need to be careful, like if you use a list then the inner workings of it in languages like C# is that it will double the internal array when you try to add more elements to it after it's full, this can result in a lot of memory waste.
That being said, the use case for JavaScript is in 99% of cases not "optimize this by another 10ms". JavaScript is used because of its non-blocking IO nature, streaming, async/await reactive programming, and its rapid speed of development, and 90% of its use is web communication, not some highly optimized graphics engine. So there are very, very few edge cases where you need over-9000 complexity optimizations; feature development, code readability, maintainability and the like are a much bigger deal in JS in general. Along with that, in most use cases you aren't going to request 1M data records from your DB using JS - usually it's more like the 50 you want to display on a page, and for that you'll use the DB to optimize your query. There's hardly ever any need for such large data structures in JS development (or web development in general); it's a lot better to pull what you need and to request more, or continuously stream what you need, to the client. So a lot of data structures (like a binary tree) aren't really relevant to JS unless it's a very specific use case.
Consider a JSP application with a couple of JavaScript files. The backend is fully localized, using several .properties files, one for each language. Rendered HTML contains strings in the correct language - all of this is the usual stuff and works perfectly.
Now however, from time to time I need to use some localized string in a JavaScript resource. Suppose e.g.:
function foo() {
  alert('This string should be localized!');
}
Note that this is somewhat similar to the need to refer some AJAX endpoints from JavaScript, a problem well solved by a reverse JS router. However the key difference here is that the backend does not use the router, but it does use the strings.
I have come up with several approaches, but none of them seems to be good enough.
Parameters
JSP that renders the code to invoke foo() will fetch the string:
foo('<%= localize("alert.text") %>');
function foo(alertText) {
  alert(alertText);
}
Pros: It works.
Cons: Method signatures are bloated.
Prototypes
JSP renders a hidden span with the string, JS fetches it:
<span id="prototype" class="hidden">This string should be localized!</span>
function foo() {
  alert($('#prototype').text());
}
Pros: Method signatures are no longer bloated.
Cons: Must make sure that the hidden <span>s are always present.
AJAX
There is an endpoint that localizes strings by their key, JS calls it. (The code is approximate.)
function foo() {
  $.ajax({ url: '/ajax/localize', data: { key: 'alert.text' } })
    .done(function(result) {
      alert(result);
    });
}
Pros: Server has full control over the localized result.
Cons: One HTTP call per localized string! If any of the AJAX calls fails, the logic breaks.
This can be improved by getting multiple strings at once, but the roundtrip problem is an essential one.
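For completeness, a batched version might look something like this (a sketch only; an endpoint accepting multiple keys is an assumption):

// Fetch several localized strings in one roundtrip.
// Assumes /ajax/localize accepts a comma-separated key list and
// returns a { key: translation } object.
function localize(keys, done) {
  $.ajax({ url: '/ajax/localize', data: { keys: keys.join(',') } })
    .done(function(result) {
      done(result);
    });
}

localize(['alert.text', 'confirm.text'], function(strings) {
  alert(strings['alert.text']);
});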
Shared properties files
Property file containing the current language is simply exposed as an additional JS resource on the page.
<script src="/locales/en.js"></script> <!-- brings in the i18n object -->

function foo() {
  alert(i18n.alert.text);
}
Pros: Fast and reliable.
Cons: All the strings are pulled in - also the ones we don't need or want to expose to the user.
This can be improved by keeping a separate set of strings for JS, but that violates the DRY principle.
Now what?
So that's it, those are the ideas I've had. None of them is ideal; all have their own share of problems. I am currently using the first two approaches, with mixed success. Are there any other options?
Your idea of a shared properties file is the neatest of the 4 solutions you suggested. A popular CMS I use called Silverstripe actually does the same thing: it loads a localised JS file that adds the strings to a dictionary, allowing a common API for retrieving the strings.
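As a rough sketch of what such a generated file and dictionary API could look like (the shape of this i18n object is made up, loosely modelled on that pattern):

// /locales/en.js - generated per locale on the server (sketch).
var i18n = (function() {
  var strings = {
    'alert.text': 'This string should be localized!'
  };
  return {
    // t('alert.text') -> localized string, falling back to the key
    t: function(key) {
      return strings.hasOwnProperty(key) ? strings[key] : key;
    }
  };
})();

function foo() {
  alert(i18n.t('alert.text'));
}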
One point made in the comments is about including localised strings for a particular view. While this can have some uses under particular situations where you have thousands of strings per localisation (totaling more than a few hundred KBs), it can also be a little unnecessary.
Client-side
Depending on how many other JS resources you are loading at the same time, you may not want another request per view just to add a few more strings for that locale. Each view's localisation would need to be requested separately, which can be a little inefficient. Give the browser all the localisations in one request and let it read from its cache for each view.
The browser caching the complete collection of locale strings can lead to a better user experience with faster page load times with one less request per view. For mobile users, this can be quite helpful as even with faster mobile internet, not every single request is lightning fast.
Server-side
If you go by the method suggested in the comments, having a locale.asp file generate the JS locale strings on the fly, you are giving the server a little more work per user. This won't be that bad if each user requests it once; however, if it is requested per view, it might start adding up.
If the user views 5 different pages, that is 5 times the server is executing the JSP, building the data for the particular view. While your code might be basic if-statements and loading a few files from the filesystem, there is still overhead in executing that code. While it might not be a problem say for 10 requests per minute, it could lead to issues with 1,000 requests per minute.
Again, that extra overhead may be small, but it simply isn't necessary unless you really want many small HTTP requests and little browser caching instead of a few larger requests.
Additional Thoughts
While this might sound like premature optimisation, I think it is a simple and important thing to consider. We don't know whether you have 5 users or 5,000, whether your users go see 5 different views or 500, whether you will have many/any mobile users, how many locales you want to support, how many different strings per locale you have.
Because of this I think it is best to see the larger picture of what the choice of having locale strings downloaded per view would do.
I was wondering if anyone can recommend app.config settings for map and reduce Javascript VM pools?
My current setup consists of two (2) Amazon EC2 m1.medium instances in the cluster. Each server has a single CPU with ~4GB of RAM. My ring size is set to 64 partitions, with 8 JS VMs for map phases, 16 JS VMs for reduce phases, and 2 for hooks. I am planning to add another instance to the cluster, making it 3, but I'm trying to stretch as much as possible until then.
I recently encountered high wait times for queries on a set of a few thousand records (the query was to fetch the most recent 25 news feeds from a bucket of articles), resulting in timeouts. As a workaround, I passed "reduce_phase_only_1" as an argument. My query was structured as follows:
1) 2i index search
2) map phase to filter out deleted articles
3) reduce phase to sort on creation time (this is where i added reduce_phase_only_1 arg)
4) reduce phase to slice the top of results
Anyone know how to alleviate the bottleneck?
Cheers,
-Victor
Your Map phase functions are going to execute in parallel close to the data while the reduce phase generally runs iteratively on a single node using a single VM. You should therefore increase the number of VMs in the pool for map phases and reduce the pool size for Reduce phases. This has been described in greater detail here.
I would also recommend not using the reduce_phase_only_1 flag, as leaving it out allows you to pre-reduce if volumes grow, although this will result in a number of reduce phase functions running in parallel, which will require a larger pool size. You could also merge your two reduce phase functions into one, sorting at each stage before cutting excessive results.
MapReduce is a flexible way to query your data, but also quite expensive, especially compared to direct key access. It is therefore best suited for batch type jobs where you can control the level of concurrency and the amount of load you put on the system through MapReduce. It is generally not recommended to use it to serve user driven queries as it can overload the cluster if there is a spike in traffic.
Instead of generating the appropriate data for every request, it is very common to de-normalise and pre-compute data when using Riak. In your case you might be able to keep lists of news in separate summary objects and update these as news items are inserted, deleted or updated. This adds a bit more work on insert, but makes reads much more efficient and scalable, as they can be served through a single GET request rather than a MapReduce job. If you have a read-heavy application, this is often a very good design.
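A sketch of the idea (the riak.get/riak.save calls are placeholders for whatever client you use, not a specific API):

// Maintain a "latest news" summary object on every article write,
// so reads become a single GET instead of a MapReduce job.
function onArticleInserted(article, riak, done) {
  riak.get('summaries', 'latest-news', function(err, summary) {
    if (err) return done(err);
    summary = summary || { articles: [] };
    summary.articles.push({ key: article.key, created: article.created });
    summary.articles.sort(function(a, b) { return b.created - a.created; });
    summary.articles = summary.articles.slice(0, 25); // keep the newest 25
    riak.save('summaries', 'latest-news', summary, done);
  });
}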
If inserts and updates are too frequent, thereby making it difficult to update these summary objects efficiently, it may be possible to have a batch job do this at specific time intervals instead if it is acceptable that the view may not be 100% up to date.
I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising):
<script>
  var largeArray = [/* lots of stuff in here */];
</script>
In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is >44 megabytes (46,573,399 bytes, to be exact).
If you want to see for yourself, you can download it from GitHub. (All the data in there is canned, so much of it is repeated. This will not be the case in production.)
Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6.
In Firefox, I can see a reasonably useful error in the console:
Error: script stack space quota is exhausted
In Chrome, I simply get the sad-tab page.
Cut to the chase, already
Is this really too much data for our modern, "high-performance" browsers to handle?
Is there anything I can do* to gracefully handle this much data?
Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong...
Edit 1
@Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now.
@various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards.
@Phrogz: each element looks something like this:
{ dateTime: new Date(1296176400000),
  terminalId: 'terminal999',
  'General___BuildVersion': '10.05a_V110119_Beta',
  'SSM___ExtId': 26680,
  'MD_CDMA_NETLOADER_NO_BCAST___Valid': 'false',
  'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt': 0 }
@Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space ...and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all! ☹
Edit 2
Mission accomplished!
With the spot-on suggestions from Juan as well as Guffa, I was able to get this to work! It would appear that the problem was just in parsing the source code, not actually working with it in memory.
To summarize the comment quagmire on Juan's answer: I had to split up my big array into a series of smaller ones, and then Array#concat() them, but that wasn't enough. I also had to put them into separate var statements. Like this:
var arr0 = [...];
var arr1 = [...];
var arr2 = [...];
/* ... */
var bigArray = arr0.concat(arr1, arr2, ...);
To everyone who contributed to solving this: thank you. The first round is on me!
*other than the obvious: sending less data to the browser
Here's what I would try: you said it's a 44MB file. That surely takes more than 44MB of memory once parsed - I'm guessing well over 44MB of RAM, maybe half a gig. Could you cut down the data until the browser doesn't crash, and see how much memory the browser uses?
Even apps that run only on the server would be well served to not read a 44MB file and keep it in memory. Having said all that, I believe the browser should be able to handle it, so let me run some tests.
(Using Windows 7, 4GB of memory)
First Test
I cut the array in half and there were no problems: it uses 80MB, no crash.
Second Test
I split the array into two separate arrays that together still contain all the data: it uses 160MB, no crash.
Third Test
Since Firefox said it ran out of stack, the problem is probably that it can't parse the array at once. I created two separate arrays, arr1 and arr2, then did arr3 = arr1.concat(arr2); it ran fine and uses only slightly more memory, around 165MB.
Fourth Test
I created 7 of those arrays (22MB each) and concatted them to test browser limits. It takes about 10 seconds for the page to finish loading. Memory goes up to 1.3GB, then drops back down to 500MB. So yes, Chrome can handle it. It just can't parse it all at once, because it uses some kind of recursion, as can be seen from the console's error message.
Answer
Create separate arrays (less than 20MB each) and then concat them. Each array should be in its own var statement, instead of multiple declarations with a single var.
I would still consider fetching only the necessary part, as it may make the browser sluggish; however, if it's an internal task, this should be fine.
Last point: You're not at maximum memory levels, just max parsing levels.
Yes, it's too much to ask of a browser.
That amount of data would be manageable if it already was data, but it isn't data yet. Consider that the browser has to parse that huge block of source code while checking that the syntax adds up for all of it. Once parsed into valid code, the code has to run to produce the actual array.
So, all of the data will exist in (at least) two or three versions at once, each with a certain amount of overhead. As the array literal is a single statement, each step will have to include all of the data.
Dividing the data into several smaller arrays would possibly make it easier on the browser.
Do you really need all the data? can't you stream just the data currently needed using AJAX? Similar to Google Maps - you can't fit all the map data into browser's memory, they display just the part you are currently seeing.
Remember that 40 megs of hard data can be inflated to much more in the browser's internal representation. For example, the JS interpreter may use a hashtable to implement the array, which would add memory overhead. Also, I expect that the browser stores both the source code and the resulting JS objects; that alone doubles the amount of data.
JS is designed to provide client-side UI interaction, not handle loads of data.
EDIT:
Btw, do you really think users will like downloading 40 megabytes worth of code? There are still many users with less than broadband internet access. And execution of the script will be suspended until all the data is downloaded.
EDIT2:
I had a look at the data. That array will definitely be represented as a hashtable. Also, many of the items are objects, which will require reference tracking... that is additional memory.
I guess the performance would be better if it was a simple vector of primitive data.
EDIT3: The data could certainly be simplified. The bulk of it is repeating strings, which could be encoded in some way as integers or something. Also, my Opera is having trouble just displaying the text, let alone interpreting it.
EDIT4: Forget the DateTime objects! Use unix epoch timestamps or strings, but not objects!
EDIT5: Your processor doesn't matter because JS is single-threaded. And your RAM doesn't matter much either; most browsers are 32-bit, so they can't use much of that memory anyway.
EDIT6: Try changing the items from objects with string keys to plain arrays with sequential integer indices (0, 1, 2, 3...). That might let the browser use a more efficient array data structure. You can then use named constants to access the array items efficiently, as sketched below. This is going to cut down the data size by a huge chunk.
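Something like this hypothetical restructuring of the sample element shown earlier:

// Positional encoding: one flat array per record instead of an object
// with long string keys; field names exist once, as index constants.
var DATE_TIME = 0, TERMINAL_ID = 1, BUILD_VERSION = 2;

var largeArray = [
  [1296176400000, 'terminal999', '10.05a_V110119_Beta']
  // ...more records
];

// Access stays readable via the constants:
var first = largeArray[0];
var when = new Date(first[DATE_TIME]);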
Try retrieving the data with Ajax as a JSON page. I don't know the exact size, but I've been able to pull large amounts of data into Google Chrome that way.
Use lazy loading. Have pointers to the data and get it when the user asks.
This technique is used in various places to manage millions of records of data.
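A rough sketch of that pattern (the endpoint and paging parameters are made up):

// Fetch records on demand instead of embedding them all in the page.
function fetchPage(offset, limit, done) {
  $.getJSON('/records', { offset: offset, limit: limit }, function(rows) {
    done(rows); // hand just this slice to the UI
  });
}

// Render the first screenful; fetch more as the user scrolls or pages.
fetchPage(0, 100, function(rows) {
  renderRows(rows); // renderRows is whatever your UI layer provides
});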
[Edit]
I found what I was looking for. Virtual scrolling in the jqgrid. That's 500k records being lazy loaded.
I would try having it as one big string with a separator between each "item", then use split, something like:
var largeString = "item1,item2,.......";
var largeArray = largeString.split(",");
Hopefully a string won't exhaust the stack so fast.
Edit: in order to test it, I created a dummy array with 200,000 simple items (each item one number) and Chrome loaded it in an instant. 2,000,000 items? A couple of seconds, but no crash. A 6,000,000-item array (50 MB file) made Chrome load for about 10 seconds, but still no crash either way.
So this leads me to believe the problem is not with the array itself but rather its contents. Optimize the contents to simple items, then parse them "on the fly", and it should work.
I was reading this book - Professional JavaScript for Web Developers - where the author mentions that string concatenation is an expensive operation compared to using an array to store the strings and then using the join method to create the final string. Curious, I did a couple of tests here to see how much time it would save, and this is what I got -
http://jsbin.com/ivako
Somehow, Firefox usually produces similar times for both ways, but in IE, string concatenation is much, much faster. So, can this idea now be considered outdated (browsers have probably improved since)?
Even if it were true and join() was faster than concatenation, it wouldn't matter. We are talking about tiny amounts of milliseconds here, which are completely negligible.
I would always prefer well structured and easy to read code over microscopic performance boost and I think that using concatenation looks better and is easier to read.
Just my two cents.
On my system (IE 8 on Windows 7) the times of StringBuilder in that test vary from about 70-100% in range - that is, it is not stable - although the mean is about 95% of that of the normal appending.
While it's easy now just to say "premature optimization" (and I suspect that in almost every case it is) there are things worth considering:
The problem with repeated string concatenation comes from repeated memory allocations and repeated data copies (advanced string data types can reduce/eliminate much of this, but let's keep assuming a simplistic model for now). From this, let's raise some questions:
What memory allocation is used? In the naive case each str+=x requires str.length+x.length new memory to be allocated. The standard C malloc, for instance, is a rather poor memory allocator. JS implementations have undergone changes over the years including, among other things, better memory subsystems. Of course these changes don't stop there and touch really all aspects of modern JS code. Because now ancient implementations may have been incredibly slow in certain tasks does not necessarily imply that the same issues still exist, or to the same extents.
As with above the implementation of Array.join is very important. If it does NOT pre-allocate memory for the final string before building it then it only saves on data-copy costs -- how many GB/s is main memory these days? 10,000 x 50 is hardly pushing a limit. A smart Array.join operation with a POOR MEMORY ALLOCATOR would be expected to perform a good bit better simple because the amount of re-allocations is reduced. This difference would be expected to be minimized as allocation cost decreases.
The micro-benchmark code may be flawed depending on if the JS engine creates a new object per each UNIQUE string literal or not. (This would bias it towards the Array.join method but needs to be considered in general).
The benchmark is indeed a micro benchmark :)
Increasing the growing size should have an impact on performance based on any or all (and then some) of the above conditions. It is generally easy to show extreme cases favoring one method or another - the expected use case is generally of more importance.
Although, quite honestly, for any form of sane string building, I would just use normal string concatenation until such a time it was determined to be a bottleneck, if ever.
I would re-read the above statement from the book and see if there are perhaps other implicit considerations the author was meaning to invoke, such as "for very large strings" or "insane amounts of string operations" or "in JScript/IE6", etc. If not, then such a statement is about as useful as "insert sort is O(n*n)" [the realized costs depend upon the state of the data and the size of n, of course].
And the disclaimer: the speed of the code depends upon the browser, operating system, the underlying hardware, moon gravitational forces and, of course, how your computer feels about you.
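For what it's worth, a minimal harness for checking this on your own browser/engine might look like the following (sizes are arbitrary; treat the numbers as a smoke test, not science):

// Naive micro-benchmark: repeated += versus array push + join.
function timeIt(label, fn) {
  var start = new Date().getTime();
  fn();
  console.log(label + ': ' + (new Date().getTime() - start) + 'ms');
}

var N = 100000;
timeIt('concat', function() {
  var s = '';
  for (var i = 0; i < N; i++) s += 'abcde';
});
timeIt('join', function() {
  var parts = [];
  for (var i = 0; i < N; i++) parts.push('abcde');
  var s = parts.join('');
});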
In principle the book is right. Joining an array should be much faster than repeatedly concatenating to the same string. As a simple algorithm on immutable strings it is demonstrably faster.
The trick is: JavaScript authors, being largely non-expert dabblers, have written a load of code out there in the wild that uses concatenation, and relatively little 'good' code that uses methods like array-join. The upshot is that browser authors can get a better improvement in speed on the average web page by catering for and optimising the 'bad', more common option of concatenation.
So that's what happened. The newer browser versions have some fairly hairy optimisation stuff that detects when you're doing a load of concatenations, and hacks it about so that internally it is working more like an array-join, at more or less the same speed.
I actually have some experience in this area, since my primary product is a big, IE-only webapp that does a LOT of string concatenation in order to build up XML docs to send to the server. For example, in the worst case a page might have 5-10 iframes, each with a few hundred text boxes that each have 5-10 expando properties.
For something like our save function, we iterate through every tab (iframe) and every entity on that tab, pull out all the expando properties on each entity and stuff them all into a giant XML document.
When profiling and improving our save method, we found that using string concatenation in IE7 was a lot slower than using the array-of-strings method. Some other points of interest: accessing DOM object expando properties is really slow, so we put them all into JavaScript arrays instead. Finally, generating the JavaScript arrays themselves is actually best done on the server; you then write them onto the page as a literal to be executed when the page loads.
As we know, not all browsers are created equal. Because of this, performance in different areas is guaranteed to differ from browser to browser.
That aside, I noticed the same results as you did; however, after removing the unnecessary buffer class, and just using an array directly and a 10000 character string, the results were even tighter/consistent (in FF 3.0.12): http://jsbin.com/ehalu/
Unless you're doing a great deal of string concatenation, I would say this type of optimization is a micro-optimization. Your time might be better spent limiting DOM reflows and queries (generally the use of document.getElementById/getElementsByTagName), implementing caching of AJAX results (where applicable), and exploiting event bubbling (there's a link somewhere, I just can't find it now).
Okay, regarding this here is a related module:
http://www.openjsan.org/doc/s/sh/shogo4405/String/Buffer/0.0.1/lib/String/Buffer.html
This is an effective means of creating String buffers, by using
var buffer = new String.Buffer();
buffer.append("foo", "bar");
This is the fastest sort of String buffer implementation I know of. First of all, if you are implementing String buffers, don't use push, because that built-in method is slow; for one, push iterates over the entire arguments array rather than just adding one element.
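If you want to avoid push, the usual idiom is to assign past the end directly; a sketch:

// Appending without push: assign at the current length.
var parts = [];
for (var i = 0; i < 1000; i++) {
  parts[parts.length] = 'chunk' + i; // no arguments-array handling per call
}
var result = parts.join('');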
It all really depends upon the implementation of the join method; some implementations of join are really slow, and some are relatively fast.