Hi, basically what I want to do is pass a JavaScript array to a C module function, have the function modify the array in place, and then have JavaScript read the modified array.
My current approach uses carrays.i and array_functions, creating a doubleArray and converting the Array to and from it, and due to copying the array, it gives me worse results than native JS. My array has about 41,000 items.
C module: ~10ms(actual C function running time ~0.1ms)
JS module: ~3ms
For me, it's not possible to use a doubleArray from the very beginning (as this is part of a larger process). So the question is: how can I improve it? Is it possible to use a TypedArray/ArrayBuffer? If yes, then how?
Following is my pseudocode:
let cArray = MyCModule.new_doubleArray(array.length),
    outArray = new Array(array.length);
arrayCopyJS2C(cArray, array); // written in JS and uses a lot of time
MyCModule.MyCFunction(cArray, array.length);
arrayCopyC2JS(cArray, outArray); // also written in JS and uses a lot of time
Yes, using an ArrayBuffer (with externalized backing store) is an efficient way to share a (number) array between JavaScript and C, because it doesn't require you to copy things around. That's assuming that you can use a TypedArray "from the beginning" on the JavaScript side; if the same limitation applies as to using a doubleArray from the beginning and you'd still have to copy, then the benefit will be smaller or nonexistent (depending on how fast you've made accesses to your doubleArray).
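For illustration, here is a minimal sketch of what the JavaScript side could look like, assuming your C module can expose a function (hypothetically named MyCFunctionInPlace here) that accepts a TypedArray and operates directly on its backing store:

// Hypothetical binding: the C side reads and writes the view's buffer
// in place, so no copy back to JS is needed.
let view = new Float64Array(array.length);
view.set(array); // one copy in (none if you can build a Float64Array from the start)
MyCModule.MyCFunctionInPlace(view, view.length);
// view now holds the modified values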
That said, V8 generates highly efficient code for operations on number arrays. I'm finding it hard to believe that the same function takes either 3ms in JS or 0.1ms in C. Can you share your JS implementation? If a C implementation is 30x as fast, then I bet the JS implementation could be improved a lot to get pretty close to that. Array operations are usually dominated by the time it takes to actually get the elements from memory, and no language has a particular advantage at that.
Lau Jensen has this fantastic post on getting high performance from ClojureScript using arrays. One of the techniques he uses is an array get function that uses a mod function like so:
(defn mod [x m]
  (js* "((x%m)+m)%m"))

(defn get-cell
  [b x y]
  (js* "b[brianscript.core.mod(y,90)][brianscript.core.mod(x,90)]"))
Does the mod function do anything special in JavaScript - or is this simply Lau not doing a bounds check elsewhere and including it in his get function?
My question is: Is mod required for ClojureScript array performance - or is it simply about bounds checking?
brianscript.core.mod() compiles down to exactly the same code as cljs.core.mod(). Here is the compiled clojurescript code:
cljs.core.mod = (function mod(n, d) { return (((n % d) + d) % d); });
His reimplementation of get-cell was also unnecessary: his original version using aset will compile to the same javascript code.
He cannot use the js % operator (actually pronounced "remainder", not "modulus") because he wants coordinate lookups in his neighbor-checking functions to "wrap around" the board. E.g. If he is at coordinate 0, and wants to check the cell to his left, the index is 89, not -1. -1 % 90 = -1, but mod(-1, 90) = 89.
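To see the difference concretely (my snippet, not from the article):

var mod = function (n, d) { return ((n % d) + d) % d; };
console.log(-1 % 90);     // -1 (remainder keeps the sign of the dividend)
console.log(mod(-1, 90)); // 89 (wraps around to the other edge of the board)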
If he is seeing any performance benefit from writing his own mod, it is not because he is getting more "down to the metal" than cljs.mod. Possibly the javascript vm was deoptimizing cljs.core.mod because it was being called with arguments of inconsistent types across the life of the program. By "cloning" the mod function and always calling it with small int arguments, the VM might have been able to optimize it better and more consistently. This is just speculation, though.
There is a lot in his article that is simply wrong. He appears in many cases to be applying java/clojure reasoning to javascript/clojurescript incorrectly (he is porting this application from Clojure after all). For example, his section on using multi-dimensional arrays is citing Java bytecode to argue that javascript multidimensional arrays will be faster, which is completely not true. (There are no multidim arrays in JS--perhaps some JS VM will optimize that case, but the only way to know is to measure.) In another place he says:
The real value here [of using numbers instead of keywords], is that it allows us to use an int-array, which performs roughly 10-12% faster than a non-typed array.
In Clojurescript, int-array (and all the *-array and make-array functions) are aliases for a normal JavaScript array. In Clojure these produce Java primitive arrays of different types, but in Clojurescript they are all simply new Array(). He gets a boost in performance because he removes the overhead of comparing keyword objects and compares numbers instead, and possibly because his VM is, under the hood, using a more compact array representation after noticing the array is full of small integers instead of pointers.
Optimizing javascript performance is extremely difficult. You must benchmark every change, in multiple browsers, using realistic inputs and call patterns. Different VMs will optimize different approaches differently and there are very few rules of thumb that always work. A good thing to read about optimizing js is Javascript performance for madmen, which really just gives you a taste of how unpredictable JS performance can be. You should also read David Nolen's post on optimizing clojurescript which Jensen cites (David Nolen is a lead Clojurescript developer and maintainer).
I have a (hypothetical) question and I think the solution would be to dynamically generate code.
I want to quickly evaluate an arbitrary mathematical function that a user has entered, say to find the sum i=1 to N of i^3+2i^2+6i+1. N is arbitrary and i^3+2i^2+6i+1 is arbitrary too (it need not be a polynomial, and it might contain trigonometric functions and other functions too). Suppose N can be very large. I want to know how I can evaluate the answer quickly, assuming that I have already parsed the user input to some bytecode or something else my program can understand.
If possible, I would also like my code to be easily compiled and run on different operating systems (including mobile).
I have thought of a few ways:
1) Write an interpreter that interprets and executes each command in my bytecode. This leaves me free to use any language, but it's slow.
2) Write in Java/C# and use dynamic code generation (e.g. Is it possible to dynamically compile and execute C# code fragments?). This would execute as fast as if I had written the function directly in my source code, with only a slight slowdown since C#/Java are both JIT-compiled to machine code. The limitation is that Java isn't widely supported on mobile, and C# is Windows-only.
3) Embed an assembler/C++ compiler/compiler for whatever compiled language that I use. The limitation is that it won't work on mobile either - it won't let me execute a data file.
4) Write HTML/Javascript then embed it in a web browser control and put it in an application (I think this is the way some people use to make a universal app that would run anywhere). But it's slow too and writing real applications in Javascript is a pain.
Which option do you think is most suitable? Or perhaps I should go with a mix, maybe my application code will create and execute a generated Javascript function?
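To sketch that mix: once the parser has produced a JavaScript expression string (the body string below stands in for the parser's output and must never be raw, unvalidated user input), it can be compiled once with new Function and the JIT will optimise the loop:

const body = "return i*i*i + 2*i*i + 6*i + 1;"; // assumed output of the parser
const term = new Function("i", body);           // compiled once, JIT-optimised thereafter
function sum(N) {
  let total = 0;
  for (let i = 1; i <= N; i++) total += term(i);
  return total;
}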
The fastest and simplest way to perform these calculations for large values of N is with raw maths instead of repeated summation.
Here's the approach: each term in the expression has a closed-form formula, so evaluate the closed form for every term, add the results, and you are done. For the example sum:

sum of i   for i=1..N  =  N(N+1)/2
sum of i^2 for i=1..N  =  N(N+1)(2N+1)/6
sum of i^3 for i=1..N  =  (N(N+1)/2)^2
sum of 1/i for i=1..N  =  H[N]

H[n] is the nth Harmonic number.
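For instance, a minimal sketch of the example sum evaluated in constant time, using the identities above:

function sumExample(N) {
  const s1 = N * (N + 1) / 2;               // sum of i
  const s2 = N * (N + 1) * (2 * N + 1) / 6; // sum of i^2
  const s3 = s1 * s1;                       // sum of i^3
  return s3 + 2 * s2 + 6 * s1 + N;          // plus N ones for the constant term
}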
There are multiple approaches to calculating H[n]. One is to find the largest n you will need and generate the whole series up to that number, saving any other values required along the way...
Alternatively, store every 10,000th item in the series in a file and calculate H[n] from the nearest entry.
I've been playing around with Typed Arrays in JavaScript.
var buffer = new ArrayBuffer(16);
var int32View = new Int32Array(buffer);
I imagine normal arrays ([1, 257, true]) in JavaScript have poor performance because their values could be of any type; therefore, reaching an offset in memory is not trivial.
I originally thought that JavaScript array subscripts worked the same as objects (as they have many similarities), and were hash map based, requiring a hash based lookup. But I haven't found much credible information to confirm this.
So, I'd assume the reason why Typed Arrays perform so well is because they work like normal arrays in C, where they're always typed. Given the initial code example above, and wishing to get the 10th value in the typed array...
var value = int32View[10];
The type is Int32, so each value must consist of 32 bits or 4 bytes.
The subscript is 10.
So the location in memory of that value is <array offset> + (4 * 10); read 4 bytes from there to get the value.
I basically just want to confirm my assumptions. Are my thoughts on this correct? If not, please elaborate.
I checked out the V8 source to see if I could answer it myself, but my C is rusty and I'm not too familiar with C++.
Typed Arrays were designed by the WebGL standards committee, for performance reasons. Typically Javascript arrays are generic and can hold objects, other arrays and so on - and the elements are not necessarily sequential in memory, like they would be in C. WebGL requires buffers to be sequential in memory, because that's how the underlying C API expects them. If Typed Arrays are not used, passing an ordinary array to a WebGL function requires a lot of work: each element must be inspected, the type checked, and if it's the right thing (e.g. a float) then copy it out to a separate sequential C-like buffer, then pass that sequential buffer to the C API. Ouch - lots of work! For performance-sensitive WebGL applications this could cause a big drop in the framerate.
On the other hand, like you suggest in the question, Typed Arrays use a sequential C-like buffer already in their behind-the-scenes storage. When you write to a typed array, you are indeed assigning to a C-like array behind the scenes. For the purposes of WebGL, this means the buffer can be used directly by the corresponding C API.
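As a small illustration (gl is assumed to be a WebGL context obtained elsewhere, so the call is left commented out):

var vertices = new Float32Array([0.0, 0.0, 0.5, 1.0, 1.0, 0.0]);
// The backing store is one contiguous block of 32-bit floats, exactly
// what the underlying C API expects:
// gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);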
Note your memory address calculation isn't quite enough: the browser must also bounds-check the array, to prevent out-of-range accesses. This has to happen with any kind of Javascript array, but in many cases clever Javascript engines can omit the check when it can prove the index value is already within bounds (such as looping from 0 to the length of the array). It also has to check the array index is really a number and not a string or something else! But it is in essence like you describe, using C-like addressing.
BUT... that's not all! In some cases clever Javascript engines can also deduce the type of ordinary Javascript arrays. In an engine like V8, if you make an ordinary Javascript array and only store floats in it, V8 may optimistically decide it's an array of floats and optimise the code it generates for that. The performance can then be equivalent to typed arrays. So typed arrays aren't actually necessary to reach maximum performance: just use arrays predictably (with every element the same type) and some engines can optimise for that as well.
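A minimal sketch of that "use arrays predictably" advice (the deoptimisation comment reflects typical engine behaviour, not a guarantee):

var a = [];
for (var i = 0; i < 1000; i++) a.push(i * 0.5); // doubles only, no holes
// a[0] = {}; // storing an object here could force the engine back to a
//            // generic, slower representation for the whole array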
So why do typed arrays still need to exist?
Optimisations like deducing the type of arrays is really complicated. If V8 deduces an ordinary array has only floats in it, then you store an object in an element, it has to de-optimise and regenerate code that makes the array generic again. It's quite an achievement that all this works transparently. Typed Arrays are much simpler: they're guaranteed to be one type, and you just can't store other things like objects in them.
Optimisations are never guaranteed to happen; you may store only floats in an ordinary array, but the engine may decide for various reasons not to optimise it.
The fact they're much simpler means other less-sophisticated javascript engines can easily implement them. They don't need all the advanced deoptimisation support.
Even with really advanced engines, proving optimisations can be used is extremely difficult and can sometimes be impossible. A typed array significantly simplifies the level of proof the engine needs to be able to optimise around it. A value returned from a typed array is certainly of a certain type, and engines can optimise for the result being that type. A value returned from an ordinary array could in theory have any type, and the engine may not be able to prove it will always have the same type result, and therefore generates less efficient code. Therefore code around a typed array is more easily optimised.
Typed arrays remove the opportunity to make a mistake. You just can't accidentally store an object and suddenly get far worse performance.
So, in short, ordinary arrays can in theory be equally fast as typed arrays. But typed arrays make it much easier to reach peak performance.
Yes, you are mostly correct. With a standard JavaScript array, the JavaScript engine has to assume that the data in the array is all objects. It can still store this as a C-like array/vector, where the access to the memory is still like you described. The problem is that the data is not the value, but something referencing that value (the object).
So, performing a[i] = b[i] + 2 requires the engine to:
1) access the object in b at index i;
2) check what type the object is;
3) extract the value out of the object;
4) add 2 to the value;
5) create a new object with the newly computed value from step 4;
6) assign the new object from step 5 into a at index i.
With a typed array, the engine can:
1) access the value in b at index i (including placing it in a CPU register);
2) increment the value by 2;
3) assign the new value from step 2 into a at index i.
NOTE: These are not the exact steps a JavaScript engine will perform, as that depends on the code being compiled (including surrounding code) and the engine in question.
This allows the resulting computations to be much more efficient. Also, the typed arrays have a memory layout guarantee (arrays of n-byte values) and can thus be used to directly interface with data (audio, video, etc.).
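For concreteness, here is the a[i] = b[i] + 2 example written against typed arrays (a sketch, not engine internals):

var b = Float64Array.of(1.5, 2.5, 3.5);
var a = new Float64Array(b.length);
for (var i = 0; i < b.length; i++) a[i] = b[i] + 2; // raw doubles, no boxing
// a is now Float64Array [3.5, 4.5, 5.5]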
When it comes to performance, things can change fast. As AshleysBrain says, it comes down to whether the VM can deduce that a normal array can be implemented as a typed array quickly and accurately. That depends on the particular optimizations of the particular JavaScript VM, and it can change in any new browser version.
This Chrome developer comment provides some guidance that worked as of June 2012:
Normal arrays can be as fast as typed arrays if you do a lot of sequential access. Random access outside the bounds of the array causes the array to grow.
Typed arrays are fast for access, but slow to be allocated. If you create temporary arrays frequently, avoid typed arrays. (Fixing this is possible, but it's low priority.)
Micro-benchmarks such as JSPerf are not reliable for real-world performance.
If I might elaborate on the last point, I've seen this phenomenon with Java for years. When you test the speed of a small piece of code by running it over and over again in isolation, the VM optimizes the heck out of it. It makes optimizations which only make sense for that specific test. Your benchmark can get a hundredfold speed improvement compared to running the same code inside another program, or compared to running it immediately after running several different tests that optimize the same code differently.
I'm not really a contributor to any JavaScript engine and have only done some reading on V8, so my answer might not be completely accurate:
Values in arrays (only normal arrays with no holes/gaps, not sparse ones; sparse arrays are treated as objects) are all either pointers or numbers of a fixed length (in V8 they are 32 bits: a 31-bit integer is tagged with a 0 bit at the end; otherwise it's a pointer).
So I don't think finding the memory location is any different from a typed array, since the number of bytes is the same all over the array. But the difference is that if it's an object, then you have to add one unboxing layer, which doesn't happen for normal typed arrays.
And of course, accessing typed arrays definitely doesn't involve the type checks that a normal array has (though those might be removed in highly optimized code, which is only generated for hot code).
For writing, if it's the same type, it shouldn't be much slower. If it's a different type, then the JS engine might generate polymorphic code for it, which is slower.
You can also try making some benchmarks on jsperf.com to confirm.
I am testing different methods to initialise a large JavaScript array with zeros. So far, a simple for loop with push(0) seems to outperform the other approaches by far (see http://jsperf.com/initialise-array-with-zeros), but I am having doubts about the validity of this test.
In practice you would create such a large array only once and cache it, so that later when you need a large initialised array again you can simply slice it. Therefore I believe the most important evaluation is the time it takes the first time this code is executed, rather than an average over many trials.
Does anyone disagree? Or does anybody know how/where I can test the timings of only one round?
Edit: In response to some misconceptions as to the rationale of allocating an array with so many zeros I would like to clarify two things.
1) There will be no sparsity. I need to create more than one large array and use them for computations. These copies will be filled with floats, and the chance of a float being exactly zero is negligible.
2) Not all computations are performed sequentially over the array. I believe that a function that generates the array in the process would be inefficient compared to overwriting values in an array that is passed by reference (see e.g. gl-matrix.js).
My solution is therefore to create one large zero-filled array once and then take a slice() whenever a new array is needed, then pass that copy by reference to any function to work with it. Slice is super-duper-mega fast in any browser.
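Sketched out, the scheme looks like this (the size is a placeholder):

var ZEROS = [];
for (var i = 0; i < 100000; i++) ZEROS[i] = 0; // built once, up front
function freshArray() {
  return ZEROS.slice(0); // fast copy; each copy is then filled with floats
}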
Now, although you may still have concerns why I want to do this, what I am really interested in is if it is at all possible to evaluate the performance of different initialisation methods at the first time run. I want to have this timing because in my situation I will certainly only run this once.
And yes, my jsperf code likely misses some solutions. So if you have an approach that I didn't think of, please feel free to add it! Thanks!
Testing the operation only once is very complicated, as the performance varies a lot depending on what else the computer is doing. You would have to run that single test a lot of times, and reset to the same conditions between each test. The reason that jsperf runs the test a lot of times is to get a good average to weed out the anomalies.
You should test this in different browsers, to see which method is the best overall. You will see that you get very varying results.
In Internet Explorer, the fastest method is actually neither of the ones you tested, but a simple loop that assigns the zeroes:

var zeros = [];
for (var i = 0; i < numzeros; i++) zeros[i] = 0;
Starting with ES6, you can use fill:

var totals = new Array(5).fill(0); // [0, 0, 0, 0, 0]
There's no practical task that would amount to "initialise a JavaScript array with zeros", especially a big one. You should rethink why you need 0's there. Is this a sparse array and you need 0 as the default value? Then just add a conditional on access to default the retrieved value to 0, instead of wasting memory and initialization time.
I'm new to Javascript, and notice that you don't need to specify an array's size and often see people dynamically creating arrays one element at a time. This would be a huge performance problem in other languages, as you would constantly need to reallocate memory for the array as it increases in size.
Is this not a problem in JavaScript? If so, then is there a list structure available?
Javascript arrays are typically implemented as hashmaps (just like Javascript objects) with one added feature: there is an attribute length, which is one higher than the highest positive integer that has been used as a key. Nothing stops you from also using strings, floating-point numbers, even negative numbers as keys. Nothing except good sense.
It most likely depends on what JavaScript engine you use.
Internet Explorer uses a mix of sparse arrays and dense arrays to make that work. Some of the more gory details are explained here: http://blogs.msdn.com/b/jscript/archive/2008/04/08/performance-optimization-of-arrays-part-ii.aspx.
The thing about dynamic languages is, well, that they're dynamic. Just like ArrayList in Java, or arrays in Perl, PHP, and Python, an Array in JavaScript allocates a certain amount of memory and, when it gets to be too big, the language automatically reallocates and grows it. Is it as efficient as C++ or even Java? No (C++ can run circles around even the best implementations of JS), but people aren't building Quake in JS (just yet).
It is actually better to think of them as HashMaps with some specialized methods anyway -- after all, this is valid: var a = []; a['cat'] = 'meow';
No.
What JavaScript arrays are and aren't is determined by the language specification, specifically section 15.4. Array is defined in terms of the operations it provides, not the implementation details of the memory layout of any particular data structure.
Could Array be implemented on top of a linked list? Yes. This might make certain operations efficient, such as shift and unshift, but Array is also frequently accessed by index, which is not efficient with linked lists.
It's also possible to get the best of both worlds without linked lists. Contiguous-memory data structures, such as circular queues, have both efficient insertion/removal at the front and efficient random access.
In practice, most interpreters optimize dense arrays by using a data structure based around a resizable or reallocable array similar to a C++ vector or Java ArrayList.
Javascript arrays are not true arrays like in C/C++ or other languages. Therefore, they aren't as efficient, but they are arguably easier to use and do not throw out of bounds exceptions.
They are actually more like custom objects that use the properties as indexes.
Example:
var a = { "0": 1, "1": 2 };
a.length = 2;
for (var i = 0; i < a.length; i++)
    console.log(a[i]); // logs 1, then 2
a will behave almost like an array, and you can also call functions from the Array.prototype on it.
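For example, continuing the snippet above:

// Borrowing an Array.prototype method on the array-like object:
console.log(Array.prototype.join.call(a, "-")); // "1-2"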