JS on site behaves differently; browser and OS are the same

We are debugging a site; these are the conditions:
Both devs are on the same OS, same version (Windows 10).
Both devs are using the same browser, same version (Chrome).
Since our dev boxes were built out identically (corporate America), there are no differences in other elements: JS version, and so on.
When I visit the site, the JS fires differently than when my co-worker visits.
What is going on? Are the End Times coming?

Read this; it may be helpful:
Browser incompatibilities
When a user receives a page which includes JavaScript, the JavaScript interpreter of their browser kicks in and tries to execute the script. Now, the main problem here is that the various browsers each use their own interpreter, and that browser vendors have sometimes chosen not to implement a bit of JavaScript. Their reasons were usually related to gaining a business advantage over competitors.
Hence the feared browser incompatibilities.
In addition, each new browser version understands more JavaScript and allows more and more parts of the HTML page to be changed by scripts. This leads to even more incompatibilities.
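In practice, the usual defense against these incompatibilities is feature detection: test whether the browser actually provides an API before calling it, instead of assuming every browser behaves the same. A minimal sketch (the addEventListener/attachEvent split is a classic real-world example of such a vendor difference):

    // Feature-detect before use instead of assuming every browser
    // implements the same API.
    function onClick(element, handler) {
        if (element.addEventListener) {
            // Standards-compliant browsers
            element.addEventListener('click', handler, false);
        } else if (element.attachEvent) {
            // Older Internet Explorer
            element.attachEvent('onclick', handler);
        }
    }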

Related

Run compiled files on Google Native Client

How do you run compiled files directly using Google Native Client (PNaCl)? I tried checking their documentation. It says:
Native Client is a sandbox for running compiled C and C++ code in the browser efficiently and securely, independent of the user’s operating system.
But in their documentation, they only deal with the sources of the application. Is there any way to run compiled code directly? I want to run files with .exe and .deb extensions.
I'm not limiting the answer to Native Client. Any mechanism which can do that sort of work will work for me.
You can't run pre-compiled code within NaCl or PNaCl. You have to use the compilers provided by the SDK. There are three main reasons for this:
NaCl is an execution sandbox which relies on crafting machine code (x86-32, x86-64, ARM, MIPS) in a very particular way. This is regular machine code from the CPU's point of view, but allows the sandbox to run a validator and make sure that the code can't do anything malicious. This is called Software Fault Isolation, and is explained in this paper. The other ISA sandboxes are also documented.
PNaCl targets NaCl, but is an architecture-agnostic intermediate representation. This means that you ship what can be thought of as bytecode, and the browser figures out which type of machine code (x86-32, x86-64, ARM, MIPS) to generate based on the user's machine. The developer doesn't generate 4 binaries.
In both of the above cases, the code can execute as-is on Windows, Mac OS X, Linux, ChromeOS, and (while not usually shipped) Android. This means that the NaCl sandbox presents itself as an operating system and offers the same APIs on all of them. These APIs are different from those of other OSes, though they're pretty close to POSIX, especially if you use nacl_io.
The above points require that you use the compilers provided by the SDK.
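For reference, here is roughly what consuming an SDK-built module looks like from the page's side. This is a hedged sketch: hello.nmf is a hypothetical manifest name, and the listener-div pattern follows the NaCl SDK documentation.

    // Embed a PNaCl module and listen for messages it posts back.
    var listener = document.createElement('div');
    listener.addEventListener('message', function (event) {
        console.log('Module says: ' + event.data);
    }, true); // useCapture = true, so the embed's events are seen

    var embed = document.createElement('embed');
    embed.src = 'hello.nmf';            // manifest pointing at the .pexe
    embed.type = 'application/x-pnacl'; // PNaCl MIME type
    listener.appendChild(embed);
    document.body.appendChild(listener);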
It is technically possible to run binaries built for other architectures or operating systems, since the system is Turing-complete. That's what QEMU does, what Rosetta did, what Transmeta did, and what the Android Runtime for Chrome (ARC) enables. This usually requires binary translation and emulation of all operating system calls, which is technically difficult to implement and often carries a severe performance cost. I do not recommend exploring this option.
As @JFBastien pointed out, emulation is the only option for executing pre-compiled native code in the browser environment. But it is an option nevertheless. Depending on your performance demands, it might even be a viable one.
Click here, for example, to boot up an emulator running Windows (a very old version, though) in your browser.
From the menu, pick, for example, notepad.exe (using the cursor-down key on your keyboard) and hit Enter. There you have it: an unmodified, precompiled, native notepad.exe running inside your browser! (And probably even faster than back in the day when this OS was new.)
There are a lot of emulators written in JavaScript all around the web. Running a small Linux distribution with usable performance, and even with networking(!), graphics, and sound, is actually possible. Check out the OpenRISC emulator. You can even run an SSH daemon and log into it from your local machine!

Does Chrome understand compiled JavaScript?

Instead of having V8 compile JavaScript on the fly and then execute it, isn't it possible to just compile the JavaScript beforehand and then embed the machine code in the page instead of embedding JavaScript in the page?
There are two main problems with shipping machine code on the web:
Portability. No server can afford to provide appropriate machine code for all possible system architectures out there (present and future). For example, V8 already supports 10 different CPU architectures.
Security. No client can afford to run random machine code on their machine without knowing if it can be trusted.
To address (1) you'd generally need to cross-compile machine code, which is more difficult and costly than compiling down from a high-level language. To address (2), you'd need to validate the machine code you receive, which is more difficult and costly than compiling a high-level language.
Machine code also tends to be much larger than high-level code, so there is a bandwidth issue as well.
Now, JavaScript may not be a particularly great choice of high-level language. But it is what we are stuck with as the language of the web.
The way I understand it, the V8 JavaScript engine compiles to machine code anyway, so why not just do it beforehand?
According to the W3C HTML5 Scripting specification, there's no standards-based reason why a browser couldn't support machine code with special type attributes (as Chrome does with the Dart language):
The following lists the MIME type strings that user agents must recognize, and the languages to which they refer:
"application/ecmascript"
"application/javascript"
...
User agents may support other MIME types for other languages...
Currently, no browser has implemented such a feature.
I suspect the primary shortcoming of such an approach is that each chip architecture would require a machine-code version of the script compiled for it specifically. This means that in order to support three architectures, a page would need to include a compiled script three times. (And it should be included a fourth time, as plain JavaScript, as a fallback for architectures that you didn't include, or for browsers that can't/don't support compiled code.) This could significantly bloat the size of the page with data that is mostly useless. The increase in load time would seem to significantly offset or completely outweigh whatever time you save on compilation.
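To make that concrete, such a page might hypothetically look like the following; the machine-code MIME types are invented for illustration, and since browsers ignore script elements whose type they don't recognize, only the last tag would actually run today:

    <!-- Hypothetical per-architecture scripts; these type strings are made up. -->
    <script type="application/x-machine-code-x86-64" src="app.x64.bin"></script>
    <script type="application/x-machine-code-arm" src="app.arm.bin"></script>
    <script type="application/x-machine-code-mips" src="app.mips.bin"></script>
    <!-- Plain-JavaScript fallback: the only one current browsers execute. -->
    <script type="application/javascript" src="app.js"></script>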
An architecture-independent compromise solution like bytecode seems pretty poor: you still need to include the script twice (once as bytecode, once normally for browsers that don't support it), and you need to do some kind of run-time processing on the bytecode to turn it into machine code.
The multiple-includes-with-fallback problem is exactly why other scripting languages have not made it into the Web environment: they would need coordinated cross-vendor support to be useful. Google is trying with Dart, but it remains to be seen what degree of success they will have.
Note that Chrome does cache compiled versions of scripts so a script only needs to be compiled once and then the compiled code is cached for reuse when the user re-visits the page.

How do browsers execute JavaScript?

I'm trying to figure out if web browsers use an interpreter to execute JavaScript, or some sort of compiler. It is well known that scripting languages are interpreted, not compiled; however, there is the JScriptCompiler, which can compile JavaScript into MSIL. This leaves me to wonder if IE, FF, Chrome, etc. are using some sort of compiler, or if it's an interpreter.
Can anyone cite the specific method in which browsers run javascript?
In the past, JavaScript was interpreted, and nothing more.
In the past two years or so, browsers have been implementing new JavaScript engines that try to compile some portions of the code to speed JavaScript up.
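You can observe the effect of that compilation yourself: a hot function typically gets faster once the engine has seen it run a few times. A rough sketch (the exact numbers vary a lot between engines, and a first run may also pay one-time costs such as parsing):

    // Time the same function repeatedly; JIT-compiling engines usually
    // show the first run slower than later, optimized runs.
    function sum(array) {
        var total = 0;
        for (var i = 0; i < array.length; i++) {
            total += array[i];
        }
        return total;
    }

    var data = [];
    for (var i = 0; i < 1000000; i++) {
        data.push(i);
    }

    for (var run = 1; run <= 5; run++) {
        var start = new Date().getTime();
        sum(data);
        console.log('run ' + run + ': ' + (new Date().getTime() - start) + ' ms');
    }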
For more information on what has been done for Mozilla Firefox, you should take a look at:
JavaScript:TraceMonkey
an overview of TraceMonkey
For more information about Chrome's engine, you'll want to read:
Dynamic Machine Code Generation
And for webkit (safari) :
Announcing SquirrelFish
I'm not sure what has been (or is being) done in other browsers, but I suppose the same kind of thing exists, or will exist.
And, of course, for more information: JavaScript engine, on Wikipedia.
Here's one for IE:
http://blogs.msdn.com/b/ie/archive/2010/03/18/the-new-javascript-engine-in-internet-explorer-9.aspx
And here's Firefox:
https://hacks.mozilla.org/2009/07/tracemonkey-overview/
(thanks to Pascal MARTIN)
JScript is a scripting language provided by Microsoft. Its compilation is handled by the CLR, though it can also be interpreted. It has tighter integration with Visual Studio.
Have a look at http://msdn.microsoft.com/en-us/library/72bd815a%28v=vs.80%29.aspx for a detailed JScript description.
JavaScript is usually interpreted in web browsers (I'm not sure about Chrome and V8), but here and there you can find some standalone software which can compile it more or less correctly. The language isn't as fast as many others, and its speed and functionality depend on the browser's engine.

How do viruses get through the browser to the PC, given that JavaScript does not have many privileges?

I would like to know how browsers allow viruses to pass through to our computers. The response we receive is a text response. The only executable thing in the response is JavaScript, which does not have many privileges. What makes the browser allow certain files to be passed to the computer?
The short list:
Browser plugins. ActiveX* in general and Flash in particular are notorious for having holes.
Buffer overflows. Crafting either HTML pages or JavaScript in a specific way can make it possible to write anything you want into memory... which can then lead to remote code execution.
Other errors. I recall bugs in the past where the browser could be tricked into downloading files into a known location, then execute them.
*Google is working on expanding this particular kind of hole to other browsers with Native Client.
Things like ActiveX controls allow native code to be executed on local machines with essentially full privileges. Most viruses propagate through known security holes in unpatched browsers and don't use JavaScript directly.
Browser bugs and misconfiguration can allow sites that should be in the "Internet" (secure) security zone to execute code as if they were trusted. They can then use ActiveX components to install malware.
Exploiting software bugs: commonly when rendering images, interpreting HTML/CSS/JavaScript, or loading ActiveX components or Flash files.
Once a bug is exploited, the procedure is to inject "shell code" (a chunk of native compiled code) into the process memory to get executed.

Does the browser "cache" JavaScript code?

I was fine tuning a page that is heavy on jquery and stumbled across this website:
http://www.componenthouse.com/extra/jquery-analysis.html
When I click on the "Run Test" button the first time, the numbers are way higher than on subsequent clicks. Is this because the JS is cached by the browser? Can someone explain how this works internally? Can a user choose not to cache the JS?
External JavaScript files are cached, and, of course, an HTML page containing script tags can be cached too.
What you see may be a result of HTML caching or some browser optimization. You should try different browsers, closing and re-opening your browser, and clearing the browser's cache.
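On the developer side, a common way to force a fresh copy of an external script is a cache-busting query string; a small sketch (heavy.js is a hypothetical file name):

    // A changing query string makes the browser treat the script as a new
    // URL and re-download it instead of serving it from cache.
    var script = document.createElement('script');
    script.src = 'heavy.js?nocache=' + new Date().getTime();
    document.getElementsByTagName('head')[0].appendChild(script);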
The numbers are (significantly) different for me the second time in Firefox 3.5. OTOH, they are fairly consistent(ly slow) in IE 8. Firefox 3.5's JavaScript engine compiles the JS to executable code, so it does make sense that the first run is slower; the code hasn't been JITted yet.
The performance boost you're seeing is likely due to your JavaScript interpreter. Most newer web browsers use a JIT-compiling JavaScript engine, so code paths taken multiple times can be optimized.
Read this blog post on how Safari's JavaScript engine achieved many of its speed-ups.
Whether or not the JavaScript code is cached, execution performance isn't affected. What you are seeing is jQuery caching the results of the selector queries so they don't take as long on subsequent runs.
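Whatever the engine or library does internally, you can make that kind of reuse explicit in your own code by storing a selection instead of re-querying; a small jQuery sketch ('.item' is a hypothetical selector):

    // Re-running $('.item') walks the DOM every time; storing the result
    // in a variable reuses the already-found elements.
    var $items = $('.item');      // query the DOM once
    $items.addClass('highlight'); // reuse the stored selection
    $items.css('color', 'red');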
