I've ported an Excel retirement calculator to JavaScript. The original workbook has 35 worksheets containing many recursive calculations, all of which I've converted to JavaScript. The JavaScript version runs slower (1-2 seconds, compared to Excel's apparently instantaneous recalculation).
I am already caching (memoizing) the recursive calculations to speed things up and prevent stack overflows in the browser.
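For concreteness, the caching is plain memoization; here is a stripped-down sketch, with a toy compound-growth formula standing in for the real worksheet logic:

```javascript
// Stripped-down sketch of the memoization: balance() is a toy
// stand-in for one of the ported recursive worksheet formulas.
var cache = {};

function balance(year) {
  if (year in cache) return cache[year];     // reuse the cached result
  var result = (year === 0)
    ? 10000                                  // starting balance
    : balance(year - 1) * 1.05 + 500;        // grow 5%, add a contribution
  cache[year] = result;
  return result;
}

console.log(balance(40)); // each year is computed once, then cached
```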
Is it realistic to try to make the JavaScript faster?
How does Excel manage to be so efficient?
I read somewhere that Excel only recalculates a cell when its precedents have been modified. Even so, it seems to me that Excel is pretty much instantaneous no matter how much data needs to be recalculated.
Excel is faster because it's a few layers closer to the CPU: it runs compiled native code within the OS, rather than interpreted JavaScript within a browser.
I'd compare performance in Google Chrome or Firefox 3, which have a new generation of JavaScript engines, and see how things improve. See John Resig's post: http://ejohn.org/blog/javascript-performance-rundown/.
JavaScript is slower than just about any compiled language out there; that's why Excel is so much faster. I would use Firebug's profiler to figure out where your code is spending most of its time and focus on improving that.
If you've ported the Excel formulas to JavaScript while preserving the algorithms intact, the JavaScript code you've ended up with may not be ideal for JavaScript. Have you considered refactoring things to take advantage of JavaScript's powerful language features?
Also, are you rendering things (updating table cells etc.) while doing the calculations? Keep in mind that some DHTML updates may put a big burden on the browser (hey, you're running this inside a browser, right?). Perhaps separating the calculation from the rendering would help: first do all the calculations, then do the presentation as the final step.
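A rough sketch of that idea, assuming a hypothetical #output container (the calculation here is just a placeholder):

```javascript
// Do all the math first, building up strings in memory...
var cells = [];
var value = 10000;
for (var year = 1; year <= 40; year++) {
  value = value * 1.05 + 500;                // toy calculation step
  cells.push('<td>' + value.toFixed(2) + '</td>');
}
// ...then touch the DOM once, instead of once per iteration.
document.getElementById('output').innerHTML =
  '<table><tr>' + cells.join('') + '</tr></table>';
```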
Like other people have said, JavaScript is nowhere near as fast as a compiled language. Currently, there's somewhat of an arms race between Chrome, Firefox and Webkit's JavaScript interpreters, which has really improved the speed situation with JavaScript. However, it's still pretty slow, and if you're using IE7 (or even worse IE6), performance can be pretty dismal.
You may want to look at some of the JavaScript libraries that are out there (personally, I prefer jQuery) to see if some of them have utility functions that you could take advantage of. Some of the more heavily used JavaScript libraries may have optimized some of the work that you're trying to do. It certainly won't make JavaScript as fast as Excel, but if you can replace a lot of your functionality with utilities that have been optimized by many different people, you could see a little bit of a speed increase.
A JavaScript engine usually transforms source code into bytecode; then the bytecode is transformed into native code.
1) Why transform to bytecode first? Does transforming source code directly into native code give poor performance?
2) If the source code is very simple (e.g. an a+b function), is transforming the source code directly into native code a good idea?
Complexity and portability.
Transforming from source code to any kind of object code, whether it's bytecode for a virtual machine or machine code for a real machine, is a complex process. Bytecode more closely mimics what most real machines do, and so it's easier to work with: better for optimizing the code to run faster, transforming to machine code for an even bigger boost, or even turning into other formats if the situation calls for it.
Because of this, it usually turns out to be easier to write a front end whose only job is to transform the source code to bytecode (or some other intermediate language), and then a back end that works on the intermediate language: optimizes it, outputs machine code, and all that jazz. More traditional compilers for languages like C have done this for a long time. Java could be considered an unusual application of this principle: its build process usually stops with the intermediate representation (i.e. Java bytecode), and then developers ship that out, so that the JVM can "finish the job" when the user runs it.
There are two big benefits to working this way, aside from making the code easier to work with. The first big advantage is that you can reuse the backend to work with other languages. This doesn't matter so much for JavaScript (which doesn't have a standardized backend), but it's how projects like LLVM and GCC eventually grow to cover so many different languages. Writing the frontend is hard work, but let's say I made, for example, a Lua frontend for Mozilla's JavaScript backend. Then I could tap into all of the optimization work that Mozilla had put into that backend. This saves me a lot of work.
The other big advantage is that you can reuse the frontend to work with more machines. This one does have practical implications for JavaScript. If I were to write a JavaScript interpreter, I'd probably write my first backend for x86 (the architecture most PCs use) because that's where I'd probably be doing the development work. But most cell phones don't use an x86-based architecture (ARM is more common these days), so if I wanted to run fast on cell phones, I'd need to add an ARM backend. But I could do that without having to rewrite the whole frontend, so once again, I've saved myself a lot of work. If I wanted to run on the Wii U (or the previous generation of game consoles, or older Macs) then I'd need a POWER backend, but again, I could do that without rewriting the frontend.
The bottom line is that while it seems more complex to do two transformations, in the long run it actually turns out to be easier. This is one of those strange and unintuitive things that pops up sometimes in software design, but the benefits are real.
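To make the two-stage idea concrete, here is a deliberately tiny JavaScript sketch (nothing like a real engine): a "frontend" that turns an addition expression into stack-machine bytecode, and a "backend" that executes it.

```javascript
// Frontend: source -> bytecode for a toy stack machine.
function compile(expr) {
  var parts = expr.split('+');
  var bytecode = [];
  parts.forEach(function (p) {
    bytecode.push(['PUSH', p.trim()]);   // push each operand by name
  });
  for (var i = 1; i < parts.length; i++) {
    bytecode.push(['ADD']);              // fold the operands together
  }
  return bytecode;
}

// Backend: execute the bytecode against an environment of values.
function run(bytecode, env) {
  var stack = [];
  bytecode.forEach(function (op) {
    if (op[0] === 'PUSH') stack.push(env[op[1]]);
    else if (op[0] === 'ADD') stack.push(stack.pop() + stack.pop());
  });
  return stack.pop();
}

console.log(run(compile('a + b + c'), { a: 1, b: 2, c: 3 })); // 6
```

A real backend would optimize the bytecode and emit machine code instead of interpreting it, but the division of labour is the same: the frontend never needs to know what the backend targets.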
I was reading an ebook about web technologies and I found this passage:
JavaScript is a language in its own right (theoretically it isn't tied to web development), it's supported by most web clients under any platform, and it has some object-oriented capabilities. JavaScript is not a compiled language so it's not suited for intensive calculations or writing device drivers and it must arrive in one piece at the client browser to be interpreted so it is not secure either, but it does a good job when used in web pages.
My problem is: why can't we use JavaScript for processor-intensive calculations? The book doesn't explain it. However, I have used JavaScript for mobile applications too, and in some of them we have done very large calculations. How does being a non-compiled language affect this?
Two parts to this.
In a non-compiled language, you have to take a hit to compile or interpret it. Optimisation can reduce the cost of that, i.e. you can cache the result of the compilation, though of course that introduces complexity and uses up memory.
The other side is after a program is compiled, the result can be tweaked and specifically optimised for a particular purpose.
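As a rough JavaScript analogy of the first point's caching idea (evaluate is a hypothetical helper; new Function parses the string only once):

```javascript
// Cache the result of the compilation: compile a formula string to a
// function on first use, then reuse the compiled function afterwards.
var compiledCache = {};

function evaluate(formula, x) {
  var fn = compiledCache[formula];
  if (!fn) {
    fn = compiledCache[formula] = new Function('x', 'return ' + formula + ';');
  }
  return fn(x); // later calls skip straight to the compiled code
}

console.log(evaluate('x * x + 1', 3)); // 10, compiles on first use
console.log(evaluate('x * x + 1', 4)); // 17, reuses the cached function
```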
You have to consider the context though: one calculation to isolate a particular Calabi-Yau space was estimated to need 4 years to complete on the best supercomputer available at the time. So your definition of big and that of the guy who wrote the article might not be comparable. Of course, they could be one of those micro-optimisation types...
With modern compilers/interpreters and the most optimised code you can write, it has to be a real edge case for this to be significant, and pre-compiled code is pretty much a given in those scenarios.
How bad is it to use JavaScript (CoffeeScript) for implementing a heavy computational task? I am concerned with an optimization problem where an optimal solution cannot be computed very fast.
JavaScript was chosen in the first place because visualization is required, and instead of adding the overhead of communication between different processes, the decision was to just implement everything in JavaScript.
I don't see a problem with that, especially when looking at the benchmarks game. But I often receive the question: Why on earth JavaScript?
I would argue in the following way: it is an optimization problem, NP-hard. It does not matter how much faster another language would be, since this only adds a constant factor to the running time. Is that true?
Brendan Eich (Mozilla's CTO and creator of JavaScript) seems to think so.
http://brendaneich.com/2011/09/capitoljs-rivertrail/
I took time away from the Mozilla all-hands last week to help out on-stage at the Intel Developer Forum with the introduction of RiverTrail, Intel’s technology demonstrator for Parallel JS — JavaScript utilizing multicore (CPU) and ultimately graphics (GPU) parallel processing power, without shared memory threads (which suck).
See especially his demo of JS creating a scene graph:
Here is my screencast of the demo. Alas, since RiverTrail currently targets the CPU and its short vector unit (SSE4), and my screencast software uses the same parallel hardware, the frame rate is not what it should be. But not to worry, we’re working on GPU targeting too.
At CapitolJS and without ScreenFlow running, I saw frame rates above 35 for the Parallel demo, compared to 3 or 2 for Sequential.
If JavaScript is working for you and meeting your requirements, what do you care what other people think?
One way to answer the question would be to benchmark it against an implementation in a "good" language (your terms, not mine) and see how much of a difference it makes.
I don't buy the visualization argument. If your "good" language implementation was communicating with a front end you might be able to have faster performance and visualization. You might be overstating the cost of communication to make yourself feel better.
I also don't like your last argument. JavaScript is single threaded; another language might offer parallelism that JavaScript can't. The algorithm can make a huge difference; perhaps you've settled on one that is far from optimal.
I can tell you that no one in their right mind would consider using JavaScript for computationally intensive tasks like scientific computing. SO did have a reference to a JavaScript linear algebra library, but I doubt that it could be used for analysis of non-linear systems with millions of degrees of freedom. I don't know what kind of optimization problem you're dealing with.
With that said, I'd wonder if it's possible to treat this question fairly in a forum like this. It could lead to a lot of back and forth and argument.
Are you seeking a justification for your views or do you want alternatives? It's hard to tell.
Well, it's not exactly a constant factor; JavaScript is usually measured as some multiple slower than Java. But, as you can see from your results for the benchmarks game, how much slower really depends on the algorithm. Those numbers are for V8, so it will also depend on the browser you are running in: V8 is the top performer here, but code can run dramatically slower on other VMs, roughly 2x-10x.
If your problem can be subdivided across parallel processors then the new Web Workers API can dramatically improve the performance of JavaScript. It's no longer single-threaded, and it can be really fast.
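A minimal sketch of the pattern, assuming the heavy loop can be moved into a separate worker.js file (the file name and the summing loop are just placeholders):

```javascript
// main.js: hand a chunk of work to a background thread.
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log('result:', e.data);   // collect the result when it arrives
};
worker.postMessage({ from: 0, to: 1000000 });

// worker.js: do the heavy loop off the main thread.
onmessage = function (e) {
  var sum = 0;
  for (var i = e.data.from; i < e.data.to; i++) {
    sum += i;                       // stand-in for the real computation
  }
  postMessage(sum);                 // send the result back to the page
};
```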
Visualization can be done on the server or on the client. If you think lots of people are going to be executing your program at once, you might not want to run it on the server: if one instance eats up that much processing power, think what 1,000 of them would do to your server. With JavaScript you do get a cheap parallel processor by federating all the browsers. But as far as visualization goes, it could also be done on the server and sent to the client as it progresses. It's just a question of what you think is easier.
The only way to answer this question is to measure and evaluate those measurements as every problem and application has different needs. There is no absolute answer that covers all situations.
If you implement your app/algorithm in JavaScript, profile that JavaScript to find out where the performance bottlenecks are, optimize them as much as possible, and it is still too slow for your application, then you need a different approach.
If, on the other hand, you already know that this is a massively time draining problem and even in the fastest language possible, it will still be a meaningful bottleneck to the performance of your application, then you already know that javascript is not the best choice as it will seldom (if ever) be the fastest executing option. In that case, you need to figure out if the communication between some sort of native code implementation and the browser is feasible and will perform well enough and go from there.
As for NP-hard vs. a constant factor, I think you're fooling yourself. NP-hard means you need to first make the algorithm as smart as possible so you've reduced the computation to the smallest/fastest possible problem. But, even then, the constant factor can still be massively meaningful to your application. A constant factor could easily be 2x or even 10x which would still be very meaningful even though constant. Imagine the NP-hard part was 20 seconds in native code and the constant factor for javascript was 10x slower. Now you're looking at 20 sec vs. 200 sec. That's probably the difference between something that might work for a user and something that might not.
I've recently started work at a new company and they have an existing application with thousands of lines of JavaScript code. The baseline contains dozens of JS files with easily over 10,000 custom lines of code, and they also use multiple 3rd-party libraries such as jQuery, Livequery, jqTransform and others. One of the major complaints they have been receiving from users is the slowness of the client-side operation of the site. I've been tasked with optimizing and improving the performance of the JS. My first step will obviously be to move forward to the newest jQuery library and incorporate JSMin into the build process. Other than that, I'm wondering if anyone has tips on where to begin with optimization on such a huge code base?
You could try installing DynaTrace Ajax Edition (a free download) and see what that tells you. It supports only IE8 I think, but that's probably as good a place to start as any. It's got a much more thorough and understandable profiler interface than do either Firebug or Chrome, in my opinion.
One thing that jumps out at me is "Livequery", which if not used very carefully can cause gigantic performance problems.
Remember this: in a code base that big, developed over time and possibly not with the most "modern" Javascript techniques available, your real problems are going to be bad algorithms in your own code. Newer libraries and minification/optimization methods are good ideas, but the first thing you need to do is find the pages that seem sluggish and then start profiling. In my experience, in a big old codebase like that, you'll find something terrible really quickly. Install a desktop gadget that tracks CPU utilization. That's a great way to see when page code is causing the browser to slow down directly, and not just network lag. Any big spike in browser CPU usage for any significant amount of time should be a big red flag.
Profile that code. Don't optimize something if you just "feel" it could be optimized. Remember the 80/20 rule: 80% of the time is spent in 20% of the code.
Use Google's Closure tools. They can optimize and reduce your JS code, which will at least cause it to load faster on your client's computers.
The way to go is to find the bottlenecks. If you can reproduce the actual situation where the app is slow, you can use Firebug to profile your code and see how much time is spent in every function and how many times each has been called. From this information it's pretty easy to determine which areas need improvement.
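As a low-tech complement to the profiler, you can bracket a suspect section with console.time, which Firebug and the Chrome developer tools both support (suspectSection here is just a stand-in):

```javascript
// Stand-in for a slow piece of real application code.
function suspectSection() {
  var total = 0;
  for (var i = 0; i < 10000000; i++) total += i;
  return total;
}

console.time('suspect');     // start a named timer
suspectSection();
console.timeEnd('suspect');  // logs e.g. "suspect: 42ms"
```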
Generally the bottlenecks of a web application are:
Working with the DOM extensively (repaints, reflows; see the sketch after this list)
Heavy network communication (AJAX)
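To illustrate the DOM point, one common way to cut down reflows is to build nodes off-document and insert them in one go ('list' is a hypothetical ul id):

```javascript
// Batch node creation in a DocumentFragment so the document
// reflows once, instead of once per appended row.
var fragment = document.createDocumentFragment();
for (var i = 0; i < 1000; i++) {
  var li = document.createElement('li');
  li.appendChild(document.createTextNode('row ' + i));
  fragment.appendChild(li);   // off-document: no reflow yet
}
document.getElementById('list').appendChild(fragment); // one reflow
```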
You have a long road ahead of you, mate, and I don't envy you.
Here are some Performance Optimization Techniques for Javascript that I wrote down after working in a similar role as yours recently.
They are broken down into 5 broad categories in order of the performance difference they make.
However, given what you said about the codebase, I think the second section on Managing and Actively reducing your Dependencies is the most relevant, particularly:
Modifying code to reduce library dependencies, and
Using a post-load dependency manager for your libraries and modules (a bare-bones version is sketched below)
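A bare-bones sketch of that second point, just to show the shape of the idea (loadScript and charts.js are hypothetical names):

```javascript
// Inject a <script> tag after the page is up, and run a
// callback once the file has loaded and executed.
function loadScript(url, callback) {
  var script = document.createElement('script');
  script.src = url;
  script.onload = callback;
  document.getElementsByTagName('head')[0].appendChild(script);
}

// Usage: pull in a non-critical module only when it's needed.
loadScript('charts.js', function () {
  console.log('charts module ready');
});
```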
However all 25 techniques listed there are useful for improving performance.
I hope that you find them useful.
I'm going to develop comprehensive educational software which runs in the browser and involves a lot of visualization and simulation work (electrostatic and electromagnetic visualization, 2D and 3D).
Which language (Processing, JavaScript or something else) is best suited to my purpose?
The question is indeed broad but I will answer from the experience I've had.
JavaScript is not really meant for heavy mathematical calculation, which is what might be necessary to compute a lot of E&M phenomena quickly (especially if they are not represented as closed-form solutions). It also comes down to how much detail you want in your graphs (more steps = more calculations). You may find yourself needing to do more optimization to make up for the performance difference.
I did some visualizations of antenna arrays (they had closed-form solutions, only simple arrays) in Flash and it worked out OK. JavaScript will definitely not be up to par for any 3D simulations you might want to do.
I wonder if Silverlight might be a better solution, because you may find more mathematics libraries for .NET than for ActionScript, which could save you a lot of the work of writing the math out yourself (but you might end up doing that anyway because of the performance issues).
As others have suggested, JavaScript is not that strong a language when it comes to visualization.
Processing is a really good language for what you're trying to do; it's easy to learn and is Java-based. Data visualization is built directly into the language, as well as temporal space (i.e. advance "1 tick" in time and have the visualization react to that).
Also, if you're interested in going that route, I'd suggest picking up Visualizing Data, which is pretty much a Processing primer.
Flash may be the more common application stack right now for what you are looking for, but Silverlight is looking primed to take the title from them based on the powerful features that it contains.
I would go Flex or Silverlight myself
Plenty of re-usable libraries
Native support for multimedia
Native support for graphics and animation
I'm a little late to the show, but what you want has been implemented in JavaScript, and you'll find it incredibly useful. I recommend running it under Chrome, as its JavaScript engine is extremely fast. (You may even want to try Chrome 2, which is even faster.)
http://ejohn.org/blog/processingjs/
http://ejohn.org/apps/processing.js/examples/basic/ (91 basic demos.)
http://ejohn.org/apps/processing.js/examples/topics/ (51 larger, topical, demos.)
http://ejohn.org/apps/processing.js/examples/custom/ (4 custom "in the wild" demos.)
See also: http://www.chromeexperiments.com/
I second LFSR Consulting's opinion: Processing is used a lot for educational purposes, it is free, fast (Java is faster than Flash in general), and easy to learn, so you get results faster. It supports 3D, you can pull in Java libraries for simulation and computing, etc. And it has a great community! :-)
JavaScript is a bit lightweight for such usage. JavaFX is hyped, but it doesn't really have 3D (although some have used Java3D with it) and it is still a bit young.
Flash and Silverlight: no comment, not much experience in the field. OpenLaszlo can be an alternative...
You really have two choices: ActionScript in Flash, or VB.NET/C#/other in Silverlight.
So first you need to decide which of these platforms you will target.
You may be able to split the problem into two parts, the user-interaction and display part, and the heavy calculations part.
If you can move the heavy calculations to a server, you can still show everything in JavaScript (a rough sketch of the split is below).
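For example, under the assumption of a hypothetical /simulate endpoint that returns precomputed points as JSON:

```javascript
// Ask the server for the heavy numbers, then only draw in the browser.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/simulate?field=electrostatic&steps=500');
xhr.onload = function () {
  var points = JSON.parse(xhr.responseText); // precomputed on the server
  drawField(points);                         // lightweight client-side render
};
xhr.send();

function drawField(points) {
  // Stand-in for the real canvas/SVG drawing code.
  console.log('rendering ' + points.length + ' points');
}
```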
One difficulty with JavaScript is that it is interpreted and you will need to write more of the equations yourself, so there is a performance hit and extra development time, but it will work without any plugins, unless you want 3D beyond what the canvas tag can do.
Flash and Silverlight may have better options, but then you are learning new languages and requiring plugins, depending on what version of Flash you want to use.
Check out processing.js, Xcode, and iProcessing!
ProcessingJS is great for data visualization but is lacking in interactivity.
You should probably try Python. It is a really good language for educational and computational purposes, it has a pretty decent community, and the syntax is not too tough. Even though it was designed for the command line, you can create front-end GUIs for it using external packages, and it also provides packages like SciPy, NumPy and Matplotlib for advanced plotting and data visualization.