Stress testing my Foxx app eventually crashes ArangoDB with a SIGSEGV. Looking at the core file, it seems to be related to V8 running out of memory. I'd like to do memory profiling on the heap to help track down potential leaks. Since the V8 engine is an integral part of arangod, how do I access and use the V8 profiler? The Node modules that help with this all rely on native C++ addons, so they won't run under Foxx.
Unfortunately, the V8 engine and its garbage collector have some glitches in their memory management.
In some cases the GC runs in tight loops to squeeze a bit more memory out of the system; in other cases V8 instantly terminates the process instead of giving its host process a chance to cope with the situation.
This is a problem that all V8-based solutions have to fight with - Node.js too. The V8 team is working on this, and they make progress with every version, but the end of that road hasn't been reached yet.
Regarding a debugging interface, which would most probably also provide memory profiling: we are well aware that it is currently missing, and we are tracking progress on this via GitHub issue #1538. As resources become available for this topic, we will start working on it.
You can probably use flame graphs in some way right now with the aid of the Linux kernel's perf tooling, but it seems to be problematic to get the names of the JIT-compiled functions written out, which is needed to make this more useful.
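In the meantime, V8 can emit a symbol map for its JIT-compiled code that Linux perf understands. A rough sketch of how that might look, assuming your arangod version lets you forward V8 flags via --javascript.v8-options (verify against your build's --help output):

arangod --javascript.v8-options="--perf-basic-prof" ...
perf record -g -p $(pgrep arangod) -- sleep 30
perf report   # JS frames resolve via the /tmp/perf-<pid>.map file V8 writes

This is the same --perf-basic-prof approach Node.js users combine with perf to get flame graphs with JavaScript function names; whether it works end to end against arangod is something you would have to verify.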
I recently started some web development, with ASP.NET and some JavaScript, and something is confusing me a lot.
I always read that JavaScript used to be interpreted until JIT slowly made it so that chunks are compiled to machine code (which made browsers a lot faster).
This makes no sense to me. How can JavaScript compile to native machine code, if traditional JavaScript apps don't target the machine/CPU to begin with?
I understand if an Electron.js app gets compiled to machine code using the Node.js runtime. That I get, because it compiles natively to machine code and, as far as I understand it, doesn't run in a browser.
If traditional JavaScript apps run in a browser, why must the code be compiled to machine code? The browser is responsible for running the code, not the CPU. The CPU runs the browser itself. I actually don't see how the native OS can influence anything that happens in the browser at all, or vice versa. Seems like a security issue as well.
Sorry if it's a stupid question, but I can't find any resource that will go beyond saying "Javascript uses JIT"
Thank you!
Lauren
At the end of the day, the CPU has to run the code.
JIT-compiling it down to machine code is one way to make that faster.
How can JavaScript compile to native machine code, if traditional JavaScript apps don't target the machine/CPU to begin with?
It is not "Javascript" that is doing it, it is the browser (or rather, the Javascript execution engine inside the browser), and since it is "JIT" it knowns exactly which CPU to target (this is not done in a generic way, this is done for the specific CPU that the browser is currently running on).
So, yes, there is some mismatch, since JavaScript does not use low-level primitive types the CPU can work with directly, which is why there is a lot of indirection and speculative type-inference guesswork. The resulting machine code is very different from what you would get from hand-coded assembly, but it can still be a net positive. To help with this, WASM was developed, which is closer to "normal" machine code.
Other intermediate, non-CPU-specific formats like JVM bytecode, CLR bytecode, or LLVM bitcode are in a similar situation (in that they, too, can be compiled to machine code for a CPU they do not themselves target directly) -- but they have already been "lowered" from language source code to something close to machine code.
Seems like a security issue as well.
Yes, it can be. The browser has to be careful in what it is doing here, and the OS should sandbox the browser as much as possible.
Executing machine instructions directly is faster than running an interpreter, and JIT compilation seeks to take advantage of this for a performance boost. All programs running on your computer become machine code at some point; the only question is which instructions get executed.
let x = 0;
for (let i = 0; i < 100; ++i) {
  x += 2;
}
Since it is clear that a block of code like this has no side effects, it is faster to compile it directly to instructions rather than interpreting each line.
# Nios II assembly; sorry, it's the only one I know
movi r2, 0          # r2 holds x
movi r3, 0          # r3 holds i
movi r4, 100        # loop bound
loop:
addi r2, r2, 2      # x += 2
addi r3, r3, 1      # ++i
blt  r3, r4, loop   # repeat while i < 100
Executing this will be faster than running the parsing logic for each individual statement.
TLDR: All programs ultimately run as CPU instructions, so it is faster to minimize the number of instructions executed by skipping the parsing stage when possible.
My Sencha Touch app (demo is here: http://www.bodbot.com/MobileApp/senchademo/index.html) has been crashing on a relatively regular basis on Android and Windows Phone. I have yet to figure out the root cause of the crash despite a fair amount of investigation, so any help would be awesome. Here's what I have so far:
On Android, when the app crashes, I get a signal 11 SIGSEGV error. Since I am working pretty much exclusively in JavaScript and not writing any code that points at anything in memory directly, my assumption is that the likely cause of this segmentation fault is some sort of memory leak.
When I use Chrome's timeline memory analysis and use the app very heavily, the memory usage pattern does seem to clearly indicate a memory leak, especially when compared to similar usage on one of Sencha Touch's demo apps.
My problem is that I can't track down the (assumed) memory leak. I'm doing everything that I've found about optimizing Sencha for memory (a sketch of the first two points follows the list):
I'm using listener delegates pretty much exclusively
I make sure that most components that aren't currently being viewed are destroyed
I don't go too nuts on the global variables within javascript
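For reference, a minimal sketch of what I mean by the first two points; ProfilePanel and the selectors are made up for illustration, and the exact options may differ between Sencha Touch versions:

// one delegated tap listener on the list element instead of one per row
list.element.on('tap', this.onItemTap, this, { delegate: '.item-row' });

// destroy a panel that is no longer being viewed instead of keeping it around
var panel = Ext.Viewport.down('profilepanel'); // hypothetical xtype
if (panel) {
    Ext.Viewport.remove(panel, true); // second argument destroys the component
}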
It does look like Chrome's "Record Heap Allocations" might be able to reveal something, but given the sheer volume of pieces that it's tracking, I'm having real trouble making any sense of it.
Am I missing a method for optimizing memory in Sencha Touch? Is there a more effective way than Chrome's Record Heap Allocations for tracking down Sencha Touch app memory leaks?
I just finished writing tests for a JavaScript application, and I was using Jasmine for the first time. Everything works fine, but I still need to test whether the application has any memory leaks. Is it even possible to check this programmatically within my specs? Maybe there is some additional library for this?
Chrome has a non-standard extension of the window.performance API, window.performance.memory, where you can measure memory usage.
In order to enable precise memory statistics, you must use this flag: --enable-precise-memory-info
But you also need to force a GC to tell whether memory is retained after your test, because garbage collection doesn't happen instantly.
With Chromium browser, you can run it with a special command flag to expose a method to force GC:
chromium-browser --js-flags='--expose_gc'
This gets you access to the method window.gc().
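Putting it together, a spec could look roughly like this. This is only a sketch: it assumes Chromium was started with both flags above, and createWidget/destroyWidget are hypothetical stand-ins for your application code:

// run with: chromium-browser --enable-precise-memory-info --js-flags='--expose_gc'
describe('widget lifecycle', function () {
  it('releases memory after destroy', function () {
    window.gc(); // force a collection for a stable baseline
    var before = window.performance.memory.usedJSHeapSize;

    for (var i = 0; i < 1000; i++) {
      var w = createWidget();   // hypothetical app code
      destroyWidget(w);         // should release everything it allocated
    }

    window.gc(); // collect the garbage the loop produced
    var after = window.performance.memory.usedJSHeapSize;

    // allow some slack: heap numbers are noisy even with precise info
    expect(after - before).toBeLessThan(1024 * 1024);
  });
});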
As far as I know, there is no automatic way to find the source of a JavaScript memory leak. JavaScript memory leaks are a really nasty thing on which you can waste a lot of time. Recently I was developing a very large enterprise web solution, a single-page application with almost 1 MB of minified self-written code. Suddenly we realized that our application was leaking hard. I tried hundreds of techniques to find the source of the memory leak, and the easiest way for me is to use the Google Chrome profiler: take heap snapshots and compare them against each other. Here is more information on how to do it:
https://developer.chrome.com/devtools/docs/javascript-memory-profiling
Have a nice week debugging memory leaks in your app; I hope it will take less time than in my case. :)
CouchDB ships with a default JS query server, couchJS, which is in charge of interpreting JS views (and filters, and shows) and seems to be a version of Mozilla SpiderMonkey. The one shipping with CouchDB 1.0.1 seems to be SpiderMonkey 1.8.5, if you look at the strings within the binary. However, there are other (many, in fact) JS engines out there, from V8 to JägerMonkey, which might offer (or maybe not) better performance, at least with complicated views or filters.
Has anybody tried that? Would it be worthwhile? (Maybe the first questions should be would they work? and have you tried it yourself? But hey, I can do it if nobody has; I just don't want to waste my time.)
CouchDB links against SpiderMonkey, so CouchDB 1.0.1 might run with any of a large variety of SpiderMonkey releases. (Similarly, your browser might run one of many releases of the Java or Flash plugin.)
I maintain Build-CouchDB and that does build a pretty recent SpiderMonkey, for presumed tracing JIT improvements; however I have never seen a benchmark.
The general consensus is that the JavaScript VM execution speed is not the bottleneck for CouchDB and so making it faster would not make CouchDB appreciably faster.
Is it possible to run JavaScript code in parallel in the browser? I'm willing to sacrifice some browser support (IE, Opera, anything else) to gain some edge here.
If you don't have to manipulate the DOM, you could use Web Workers. There are a few other restrictions, but check them out at http://ejohn.org/blog/web-workers/
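A minimal sketch of the pattern, assuming a separate worker.js file next to the page:

// main.js: spawn the worker and exchange messages with it
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log('result from worker:', e.data);
};
worker.postMessage(10000000);

// worker.js: runs on its own thread and has no DOM access
self.onmessage = function (e) {
  var sum = 0;
  for (var i = 0; i < e.data; i++) sum += i;
  self.postMessage(sum);
};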
Parallel.js of parallel.js.org (see also github source) is a single file JS library that has a nice API for multithreaded processing in JavaScript. It runs both in web browsers and in Node.js.
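Its map API looks roughly like this (a sketch based on the parallel.js.org documentation; note that the mapped function is serialized to a worker, so it cannot close over outer variables):

var p = new Parallel([1, 2, 3, 4, 5]);
p.map(function (n) { return n * n; })
 .then(function (squares) {
   console.log(squares); // [1, 4, 9, 16, 25]
 });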
Perhaps it would be better to recode your JavaScript in something that generally runs faster, rather than trying to speed up the JavaScript by going parallel. (I expect you'll find the cost of forking parallel JavaScript activities is pretty high, too, and that may well wipe out any possible parallel gain; this is a common problem with parallel programming.)
JavaScript is interpreted in most browsers, IIRC, and it is dynamic on top of that, which means it, well, runs slowly.
I'm under the impression you can write Java code and run it under browser plugins. Java is type-safe and JIT-compiles to machine code. I'd expect any big computation done in JavaScript to run a lot faster in Java. I'm not specifically suggesting Java; any compiled language for which you can get a plugin would do.
As an alternative, Google provides Closure, a JavaScript compiler. It is claimed to be a compiler, but it looks like an optimizer to me, and I don't know how much it "optimizes". But perhaps you can use that. I'd expect the Closure compiler to be built into Chrome (though I don't know that for a fact), and maybe just running Chrome would get you your JavaScript compiler "for free".
EDIT: After reading about what Closure does, as a compiler guy I'm not much impressed. It looks like much of the emphasis is on reducing code size, which minimizes download time but does not necessarily improve performance. The one good thing they do is function inlining. I doubt that will help as much as switching to a truly compiled language.
EDIT2: Apparently the "Closure" compiler is different from the engine that runs JavaScript in Chrome. I'm told, but don't know for a fact, that the Chrome engine has a real compiler.
Intel is coming up with an open-source project codenamed River Trail; check out http://www.theregister.co.uk/2011/09/17/intel_parallel_javascript/