How can I run compiled files directly using Google Native Client (PNaCl)? I tried checking the documentation, which says:
Native Client is a sandbox for running compiled C and C++ code in the browser efficiently and securely, independent of the user’s operating system.
But the documentation only deals with building applications from source. Is there any way to run compiled code directly? I want to run files with .exe and .deb extensions.
I'm not limiting the answer to Native Client. Any mechanism which can do that sort of work will work for me.
You can't run pre-compiled code within NaCl or PNaCl. You have to use the compilers provided by the SDK. There are three main reasons for this:
NaCl is an execution sandbox which relies on crafting machine code (x86-32, x86-64, ARM, MIPS) in a very particular way. This is regular machine code from the CPU's point of view, but allows the sandbox to run a validator and make sure that the code can't do anything malicious. This is called Software Fault Isolation, and is explained in this paper. The other ISA sandboxes are also documented.
PNaCl targets NaCl, but is an architecture-agnostic intermediate representation. This means that you ship what can be thought of as bytecode, and the browser figures out which type of machine code (x86-32, x86-64, ARM, MIPS) to generate based on the user's machine, so the developer doesn't have to generate four separate binaries.
In both of the above cases, the code can execute as-is on Windows, Mac OS X, Linux, ChromeOS, and (while not usually shipping) Android. This means that the NaCl sandbox presents itself as an operating system and offers the same APIs everywhere. These APIs are different from other OSes', though they're pretty close to POSIX, especially if you use nacl_io.
The above points require that you use the compilers provided by the SDK.
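For context, here is a minimal sketch of how a PNaCl app is actually shipped (the file names are placeholders): the page embeds a manifest rather than a native binary, and the browser translates the portable .pexe into machine code for the user's CPU.

    <!-- Sketch: the page points at a manifest (hello.nmf), not at
         machine code. File names here are placeholders. -->
    <embed name="hello_module" width="0" height="0"
           src="hello.nmf" type="application/x-pnacl" />

    <!-- hello.nmf contents: maps the portable bytecode to the program.
    {
      "program": {
        "portable": {
          "pnacl-translate": { "url": "hello.pexe" }
        }
      }
    }
    -->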
It is technically possible to run binaries built for other architectures or operating systems, since the system is Turing-complete. That's what QEMU does, what Rosetta did, what Transmeta did, and what the Android Runtime for Chrome (ARC) enables. This usually requires binary translation and emulation of all operating system calls. It is technically difficult to implement and often carries a severe performance cost. I do not recommend exploring this option.
As @JFBastien pointed out, emulation is the only option for executing pre-compiled native code in the browser environment. But it is an option nevertheless. Depending on your performance demands, it might even be a viable one.
Click here, for example, to boot up an emulator running Windows (a very old version, though) in your browser.
From the menu, pick notepad.exe, for example (using the cursor-down key on your keyboard), and hit Enter. There you have it: an unmodified, precompiled, native notepad.exe running inside your browser! (And probably even faster than back in the day when this OS was new.)
There are a lot of emulators written in JavaScript all around the web. Running a small Linux distribution with usable performance, and even with networking(!), graphics and sound, is actually possible. Check out the OpenRISC emulator. You can even run an SSH daemon and log into it from your local machine!
Related
I'm a beginner to the MEAN stack. While studying NodeJS, I came across the following statement, which has been on my mind:
Node.js is a very powerful JavaScript-based framework/platform built
on Google Chrome's JavaScript V8 Engine.
but what exactly does it mean by
built on Google Chrome's JavaScript V8 Engine.
and if it's built on Chrome's JS V8 Engine, why does it work on Firefox as well?
MEAN stack, reorganized from back to front:
MongoDB: data persistence, stores data for later retrieval
Node.js: web application server, responds to requests from clients
Express: web application framework, reduces Node boilerplate
Angular.js: browser framework
So Node.js does not "work on Firefox" (it doesn't work on Google Chrome either): it's a server-side technology. Think of it as a replacement for Python/Ruby/Java in that role. It can (and does) respond to requests from all sorts of clients (like Google Chrome and Firefox).
What the "built on V8" means is that it uses the same JavaScript interpreter/just-in-time compiler as Google Chrome. But the similarities with chrome pretty much stop there: Node has no rendering engine/css parser/DOM but does have things you need in a server like an HTTP library and a filesystem API.
Also, and I mean no offence (we all started where you are): the fact that you are even asking the question, which again is not a bad thing, means that building on a stack like MEAN is over your head right now. The documentation is going to assume that you know things you don't seem to know yet. I strongly recommend furthering your understanding of JavaScript and Node through some tutorials and barebones test apps before trying to throw databases and frameworks into the mix.
In order for a programming language to be executed by a computer, it needs to be translated into a format the machine can understand (generally termed machine code). JavaScript is no different. When your browser is presented with JavaScript code on a website, something needs to compile or, in the case of JavaScript, interpret the instructions into machine code.
V8 is the program that was developed by Google to do exactly that. When you use Chrome and it detects JavaScript on a page, it passes it to V8 to run the compilation, and then your computer executes the resulting code.
V8 was open-sourced by Google. The creator of Node, Ryan Dahl, modified the source code so that V8 could be used outside of Chrome, directly on an operating system such as Linux or macOS. That is what is meant by your first quote.
The important thing to note here is that you do not execute your Node programs in a browser, but rather on the actual computer you are using. There's no connection between V8 and Firefox, Safari, IE, etc. All of those browsers have their own JavaScript interpreters.
Ok let's get through this:
Node.js is a very powerful JavaScript-based framework/platform built on
Google Chrome's JavaScript V8 Engine.
JavaScript is a programming language used in internet browsers. It was invented at Netscape in 1995 and was submitted to a standards organization called ECMA in 1996.
ECMA took the original idea of JavaScript and made a standard called ECMAScript, which each JavaScript implementation should follow. You see, JavaScript is not a language that just exists somewhere in the ether: each internet browser comes with its own implementation of the language. This means that JavaScript usually only works in internet browsers such as Firefox, Safari, Opera or Chrome, for example. (Internet Explorer also comes with an implementation of ECMAScript, but they call it JScript, for licensing reasons I believe.)
The implementation of JavaScript that comes with Google Chrome runs on the powerful V8 engine, which is written in C++. V8 interprets your JavaScript code, provides all the variable types, manages memory, etc. The great thing about V8 is that it is open-source and can be embedded in any other C++ program.
So the creators of Node had the idea of taking V8 and enhancing it by adding the features a server needs to serve websites: reading files, responding to requests, routing, etc. This means it is now possible to program the server-side part of a website in JavaScript, thanks to the Node.js runtime that interprets your code and compiles it down to machine code. The important distinction is that Node.js DOES NOT run in your browser! It runs on a server, much like when you code the back-end using PHP and Apache.
The V8 Engine is an interpreter for JavaScript used in Google Chrome.
The statement that NodeJS is built on top of this engine means that it uses this interpreter for its own purposes, so JavaScript can also be run on the server, not just in the browser (as in Google Chrome).
NodeJS is a separate application that you can communicate with over the internet. In that sense it's like Apache, Nginx or similar, but it isn't limited to a single task like the ones mentioned; it's mostly used for building web-server-like applications.
Node uses the same JS "engine" that runs Chrome.
An engine, in this case, is a piece of software that compiles, or "translates", your JS code into machine code: the 0s and 1s your computer can understand.
This compilation is a complex process, and there are a few different approaches to solving it, for example Google's V8 or Mozilla's SpiderMonkey. Each of these supports the entire JS standard (to a certain extent), i.e. any JavaScript code can run on them.
When you run a node server, it runs on a machine that acts as a server. The code is not run on the user's machine at all; hence it doesn't matter which browser is used to view your content.
In the MEAN stack, it's the Angular code that runs on the user's computer. However, it is written in JavaScript, which can be run on any JavaScript engine.
Node.js is JavaScript on the server. For example, you can start a Node.js server on http://localhost:8000/, and you can access it with Chrome or Firefox.
Using Node.js (which uses V8), servers can be written in JavaScript rather than PHP or Ruby.
Actually, NodeJS is a cross-platform, server-side framework. As you might know, it is best suited for I/O-bound and data-streaming applications, and it uses Google Chrome's V8 JavaScript engine for those purposes.
So it's independent of browser and platform.
Instead of having V8 compile JavaScript on the fly and then execute it, isn't it possible to just compile the JavaScript beforehand and then embed the machine code in the page instead of embedding JavaScript in the page?
There are two main problems with shipping machine code on the web:
Portability. No server can afford to provide appropriate machine code for all possible system architectures out there (present and future). E.g., V8 already supports 10 different CPU architectures.
Security. No client can afford to run random machine code on their machine without knowing if it can be trusted.
To address (1) you'd generally need to cross-compile machine code, which is more difficult and costly than compiling down from a high-level language. To address (2), you'd need to validate the machine code you receive, which is more difficult and costly than compiling a high-level language.
Machine code also tends to be much larger than high-level code, so there is a bandwidth issue as well.
Now, JavaScript may not be a particularly great choice of high-level language. But it is what we are stuck with as the language of the web.
The way I understand it, the V8 JavaScript engine compiles JavaScript to machine code anyway, so why not just do it beforehand?
According to the W3C HTML5 Scripting specification, there's no standards-based reason why a browser couldn't support machine code with special type attributes (as Chrome does with the Dart language):
The following lists the MIME type strings that user agents must recognize, and the languages to which they refer:
"application/ecmascript"
"application/javascript"
...
User agents may support other MIME types for other languages...
Currently, no browser has implemented such a feature.
I suspect the primary shortcoming of such an approach is that each chip architecture would require a machine-code version of the script compiled for it specifically. This means that in order to support three architectures, a page would need to include a compiled script three times. (And it should be included a fourth time, as plain JavaScript, as a fallback for architectures that you didn't include, or for browsers that can't/don't support compiled code.) This could significantly bloat the size of the page with data that is mostly useless. The increase in load time would seem to significantly offset or completely outweigh whatever time you save on compilation.
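Purely as an illustration (the type strings below are invented; no browser defines MIME types for machine code), such a page might look like this, relying on the rule that browsers skip script elements whose type they don't recognize:

    <!-- Hypothetical: one compiled binary per architecture. A browser
         would ignore every type it doesn't recognize. -->
    <script type="application/x-machinecode-x86-64" src="app.x64.bin"></script>
    <script type="application/x-machinecode-x86-32" src="app.x86.bin"></script>
    <script type="application/x-machinecode-arm" src="app.arm.bin"></script>

    <!-- Plain JavaScript fallback for everyone else. A real scheme would
         also need a way to suppress this when a compiled version ran. -->
    <script src="app.js"></script>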
An architecture-independent compromise solution like bytecode seems pretty poor: you still need to include the script twice (once as bytecode, once normally for browsers that don't support it), and you need to do some kind of run-time processing on the bytecode to turn it into machine code.
The multiple-includes-with-fallback problem is exactly why other scripting languages have not made it into the Web environment: they would need coordinated cross-vendor support to be useful. Google is trying with Dart, but it remains to be seen what degree of success they will have.
Note that Chrome does cache compiled versions of scripts, so a script only needs to be compiled once; the compiled code is then reused when the user revisits the page.
I'm going to write a bunch of browser extensions (the same functionality for each popular browser). I hope that some of the code can be shared, but I'm not sure about this yet. Some of the extensions will certainly use native APIs. I don't have much experience with TDD/BDD, and I thought it would be a good time to start following these ideas with this project.
The problem is, I have no idea how to approach it. Should I write different tests for each browser? How far should I go with these tests? These extensions will be quite simple: some data in local storage, refreshing a page, and listening on web sockets.
My observation on why this is hard for me: there is a lot of behaviour and not many models, and those models are also platform-dependent.
I practise two different ways of testing my browser extensions:
Unit tests
Integration tests
Introduction
I will use the cross-browser YouTube Lyrics by Rob W extension as an example throughout this answer. The core of this extension is written in JavaScript and organized with AMD modules. A build script generates the extension files for each browser. With r.js, I streamline the inclusion of browser-specific modules, such as the one for cross-origin HTTP requests and persistent storage (for preferences), and a module with tons of polyfills for IE.
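For illustration, here is a rough sketch of what one of those AMD modules might look like (the module and method names here are made up); the build maps the browser-specific dependency to a different implementation per target:

    // lyrics-fetcher.js - an AMD module. Which file "cross-origin-xhr"
    // resolves to is decided at build time, per browser.
    define(['cross-origin-xhr', 'storage'], function (xhr, storage) {
      'use strict';
      return {
        fetch: function (songTitle, callback) {
          var url = 'https://lyrics.example.com/search?q=' +
                    encodeURIComponent(songTitle);
          xhr.get(url, function (response) {
            storage.set('lastQuery', songTitle); // persist a preference
            callback(response);
          });
        }
      };
    });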
The extension inserts a panel with lyrics for the currently played song on YouTube, Grooveshark and Spotify. I have no control over these third-party sites, so I need an automated way to verify that the extension still works well.
Workflow
During development:
Implement / edit feature, and write a unit test if the feature is not trivial.
Run all unit tests to see if anything broke. If anything is wrong, go back to 1.
Commit to git.
Before release:
Run all unit tests to verify that the individual modules are still working.
Run all integration tests to verify that the extension as a whole is still working.
Bump versions, build extensions.
Upload the update to the official extension galleries and to my website (Safari and IE extensions have to be self-hosted), and commit to git.
Unit testing
I use mocha + expect.js to write tests. I don't test every method of each module, just the ones that matter. For instance (a test sketch follows this list):
The DOM parsing method. Most DOM parsing methods in the wild (including jQuery) are flawed: Any external resources are loaded and JavaScript is executed.
I verify that the DOM parsing method correctly parses DOM without negative side effects.
The preference module: I verify that data can be saved and returned.
My extension fetches lyrics from external sources. These sources are defined in separate modules. These definitions are recognized and used by the InfoProvider module, which takes a query (black box) and outputs the search results.
First I test whether the InfoProvider module functions correctly.
Then, for each of the 17 sources, I pass a pre-defined query to the source (with InfoProvider) and verify that the results are expected:
The query succeeds
The returned song title matches (by applying a word similarity algorithm)
The length of the returned lyrics falls inside the expected range.
The UI is not obviously broken (verified, e.g., by clicking the Close button).
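As a concrete example of the shape of such a test, here is what a mocha + expect.js unit test for a preference module might look like (the prefs API here is a made-up placeholder rather than the extension's real module):

    // Runs under mocha; `describe` and `it` are mocha globals.
    var expect = require('expect.js');
    var prefs = require('./prefs'); // hypothetical preference module

    describe('prefs', function () {
      it('returns what was saved', function () {
        prefs.set('panelColor', '#fff');
        expect(prefs.get('panelColor')).to.be('#fff');
      });

      it('falls back to a default for unknown keys', function () {
        expect(prefs.get('nope', 'default')).to.be('default');
      });
    });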
These tests can be run directly from a local server, or within a browser extension. The advantage of the local server is that you can edit the test and refresh the browser to see the results. If all of these tests pass, I run the tests from the browser extension.
By passing an extra parameter debug to my build script, the unit tests are bundled with my extension.
Running the tests within a web page is not sufficient, because the extension's environment may differ from the normal page. For instance, in an Opera 12 extension, there's no global location object.
Remark: I don't include the tests in the release build. Most users don't take the effort to report and investigate bugs; they will just give a low rating and say something like "Doesn't work". Make sure that your extension functions without obvious bugs before shipping it.
Summary
View modules as black boxes. You don't care what's inside, as long as the output matches what is expected for a given input.
Start with testing the critical parts of your extension.
Make sure that the tests can be built and run easily, possibly in a non-extension environment.
Don't forget to run the tests within the extension's execution context, to ensure that no constraint or unexpected condition inside the extension's context breaks your code.
Integration testing
I use Selenium 2 to test whether my extension still works on YouTube, Grooveshark (3x) and Spotify.
Initially, I just used the Selenium IDE to record tests and see if it worked. That went well, until I needed more flexibility: I wanted to conditionally run a test depending on whether the test account was logged in or not. That's not possible with the default Selenium IDE (it's said to be possible with the FlowControl plugin - I haven't tried).
The Selenium IDE offers an option to export the existing tests in other formats, including JUnit 4 tests (Java). Unfortunately, this result wasn't satisfying. Many commands were not recognized.
So, I abandoned the Selenium IDE and switched to Selenium WebDriver.
Note that when you search for "Selenium", you will find information about Selenium RC (Selenium 1) and Selenium WebDriver (Selenium 2). The former is old and deprecated; the latter (Selenium WebDriver) should be used for new projects.
Once you've figured out how the documentation is organized, it's quite easy to use.
I prefer the documentation at the project page, because it's generally concise (the wiki) and complete (the Java docs).
If you want to get started quickly, read the Getting Started wiki page. If you've got spare time, look through the documentation at SeleniumHQ, in particular the Selenium WebDriver and WebDriver: Advanced Usage.
Selenium Grid is also worth reading. This feature allows you to distribute tests across different (virtual) machines. Great if you want to test your extension in IE8, 9 and 10, simultaneously (to run multiple versions of Internet Explorer, you need virtualization).
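For instance, with the JavaScript bindings (the selenium-webdriver package on npm), pointing a test at a Grid hub instead of a local browser is a one-line change (the hub URL is a placeholder):

    const webdriver = require('selenium-webdriver');

    // The hub dispatches the session to a grid node that has IE available.
    const driver = new webdriver.Builder()
        .usingServer('http://my-grid-hub:4444/wd/hub')
        .forBrowser('internet explorer')
        .build();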
Automating tests is nice. What's even nicer? Automating the installation of extensions!
The ChromeDriver and FirefoxDriver support the installation of extensions, as seen in this example.
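With the JavaScript bindings, loading a packaged extension into the browser under test looks roughly like this (the .crx path is a placeholder):

    const webdriver = require('selenium-webdriver');
    const chrome = require('selenium-webdriver/chrome');

    // ChromeDriver installs the .crx into the test profile on startup.
    const options = new chrome.Options()
        .addExtensions('/path/to/my-extension.crx');

    const driver = new webdriver.Builder()
        .forBrowser('chrome')
        .setChromeOptions(options)
        .build();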
For the SafariDriver, I've written two classes to install a custom Safari extension. I've published it and sent in a PR to Selenium, so it might be available to everyone in the future: https://github.com/SeleniumHQ/selenium/pull/87
The OperaDriver does not support installation of custom extensions (technically, it should be possible though).
Note that with the advent of Chromium-powered Opera, the old OperaDriver doesn't work any more.
There's an Internet Explorer Driver, but this one definitely does not allow one to install a custom extension. Internet Explorer doesn't have built-in support for extensions; extensions are installed through MSI or EXE installers, which are not even integrated into Internet Explorer. So, in order to automatically install your extension in IE, you need to be able to silently run an installer which installs your IE plugin. I haven't tried this yet.
Testing browser extensions posed some difficulty for me as well, but I've settled on implementing tests in a few different areas that I can invoke simultaneously from browsers driven by Selenium.
The steps I use are:
First, I write test code integrated into the extension that can be activated by simply going to a specific URL. When the extension sees that URL, it begins running the tests (a sketch follows this list).
Then, in the page that activates the testing in the extension I execute server-side tests to be sure the API performs, and record and log issues there. I record the methods invoked, the time they took, and any errors. So I can see the method the extension invoked, the web performance, the business logic performance, and the database performance.
Lastly, I automatically invoke browsers to point at that specific URL and record their performance along with other test information, errors, etc on any given client system using Selenium:
http://docs.seleniumhq.org/
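Here is a rough sketch of the URL-activation hook from step 1 (the URL and the runExtensionSelfTests function are placeholders): a content script watches for the magic test page and writes results into the DOM, where the Selenium-driven browser can read them.

    // content-script.js - injected by the extension into matching pages.
    var TEST_URL = 'https://example.com/extension-test';

    if (window.location.href.indexOf(TEST_URL) === 0) {
      // Hypothetical entry point into the extension's own test suite.
      runExtensionSelfTests(function (results) {
        var el = document.createElement('pre');
        el.id = 'extension-test-results'; // Selenium polls for this id
        el.textContent = JSON.stringify(results, null, 2);
        document.body.appendChild(el);
      });
    }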
This way I can break down the tests in terms of browser, extension, server, application, and database, and link them all together according to specific test sets. It takes a bit of work to put it all together, but once it's done you can have a very nice extension-testing framework.
Typically, for cross-browser extension development, I use Crossrider in order to maintain a single code base, but you can do this with any framework or with native extensions as you wish; Selenium won't care, it is just driving the extension to a particular page and allowing you to interact and perform tests.
One nice thing about this approach is you can use it for live users as well. If you are providing support for your extension, have a user go to your test URL, and you will immediately see the extension and server-side performance. You won't get the Selenium tests, of course, but you will capture a lot of issues this way - very useful when you are coding against a variety of browsers and browser versions.
I've always been a bit annoyed that there are two major realms of javascript projects -- Node and "the browser" -- and while most browser JS can be easily run inside Node with a couple libraries for DOM stuff if needed, porting Node stuff to the browser is usually an afterthought.
This all seems like a lot of wasted energy on the part of developer communities, which could be alleviated by all JS developers just developing for the "least common denominator" (the browser) and using various shims to use features only available in Node or other JS environments besides the plain old browser.
This would not only cut out a lot of ecosystem cruft and make development-in-the-browser more realistic, it would also make it commonplace to give the browser superpowers. Look, for example, at browserver, which sets up an http server inside the browser, but, because the browser cannot actually accept http requests, uses websockets to talk to a proxy Node server that can.
So I want to ask, what are the real technical constraints of a web browser's javascript environment versus Node?
I thought Node was just "a javascript environment, plus http server and local filesystem, minus the DOM and the chrome". Are there technical reasons why developers could not potentially move to the approach I described above, developing for the browser JS environment (does this have an official name?) and using shims for Node?
Code that runs on the client usually has very different goals from code that runs on the server. However, when it makes sense to use a library's features in both environments, many libraries are written in a universal module definition (UMD) form that makes them platform-independent (e.g. Q).
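A typical UMD wrapper looks like the following sketch (myLib is a placeholder name): the same file works under an AMD loader in the browser, under Node's CommonJS, and as a plain browser global.

    (function (root, factory) {
      if (typeof define === 'function' && define.amd) {
        define([], factory);           // AMD (browser with a module loader)
      } else if (typeof module === 'object' && module.exports) {
        module.exports = factory();    // CommonJS (Node)
      } else {
        root.myLib = factory();        // plain <script> tag
      }
    }(this, function () {
      // The library body itself is environment-agnostic.
      return {
        greet: function (name) { return 'Hello, ' + name + '!'; }
      };
    }));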
The major difference between the two environments is that one is subject to rigorous security policies and restrictions (the browser) while the other isn't. The browser is also an untrusted environment for security-related operations such as enforcing security permissions.
I'll also add @jfriend00's comment here, since I believe it's very relevant and exposes other differences:
The biggest practical difference is that you have to design a browser application to work in an installed base of existing browsers, including older versions (lowest common denominator). When deploying a node application, you get to select the ONE version of node that you want to develop for and deploy with. This allows node developers to use the latest and greatest features in node which won't be available across the general browser population for years. - jfriend00
Projects like browserver are interesting and I am all for experimental development, but are they truly useful in practice? Libraries should be designed for the environment in which they are truly useful. There is no benefit in making all libraries available in both environments. Not only will that generally result in increased code complexity, but some features simply cannot be shimmed, resulting in an inconsistent API between platforms.
Here are some of the differences between these two environments:
Node.js has built-in libraries for HTTP and socket communication, with which it can create a web server and thus replace other types of servers such as Apache or Nginx.
Node.js does not have the browser APIs related to the DOM, CSS, performance, or document, nor any of the APIs associated with the "window" object. Precisely because of the lack of a window object, the global object is named "global" instead.
Node.js has full access to the system like any other native application. This means it can read from and write directly to the file system, it can have unrestricted network access, and it can execute software.
Since JavaScript is evolving very fast, browsers often lag behind in implementing its new features, so you often need to target an older version of JavaScript in the browser. This does not apply to Node.js: you can use all the modern ES6/7/8/9 features if your version of Node.js supports them.
Although ES6 defined a syntax for working with modules (the import/export syntax), it was only recently added to Node.js as an experimental option. Node.js mainly uses the CommonJS syntax for modules.
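Side by side, the two module syntaxes from the last point (the file and function names are placeholders):

    // config-cjs.js - CommonJS, what Node.js has used from the start:
    const fs = require('fs');

    function readConfig() {
      return JSON.parse(fs.readFileSync('./config.json', 'utf8'));
    }
    module.exports = { readConfig };

    // config-esm.mjs - the ES module (import/export) syntax, shown in
    // comments because one file cannot use both systems at once:
    //   import fs from 'fs';
    //   export function readConfig() {
    //     return JSON.parse(fs.readFileSync('./config.json', 'utf8'));
    //   }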
https://github.com/rogerwang/node-webkit/wiki/How-to-package-and-distribute-your-apps
While packaging my node-webkit app for Windows using the steps given in the link above, I could not find how to prevent the resulting executable from being readable by archiving software such as WinZip. EXCERPT (from the link above): "The resulting executable from the merge will still be readable by archiving software, such as WinZip."
Is it possible to prevent it from being readable by archiving apps?
Any help is appreciated!
Fundamentally, running node-webkit is similar to running in a browser: just as you can't hide your webpage source, you can't truly hide your HTML and CSS in a way that prevents it from being read, because node-webkit needs to read it at runtime.
The situation is almost the same for JavaScript code, with one exception. V8 (the JavaScript engine in Chrome) provides a "snapshot" capability, which compiles your JavaScript into a kind of bytecode that V8 understands. Nwsnapshot is available for node-webkit, and it allows you to avoid shipping your JS source code (or at least some of it). However, this option is still experimental, and in fact there is a problem with it in version 0.8.* of node-webkit (referred to as v8 in the wiki, not to be confused with the V8 JS engine), though it should be working again in v0.9. Details can be found here if you're interested:
https://github.com/rogerwang/node-webkit/wiki/Protect-JavaScript-source-code-with-v8-snapshot
Also be aware that it can have performance impacts, if that matters for your application.
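Based on the wiki page linked above, the workflow looks roughly like this; the flag and field names are taken from that wiki and may have changed between node-webkit versions, so treat this as a sketch:

    # 1. Compile the JS you want to hide into a V8 snapshot blob
    #    (nwsnapshot ships with the node-webkit distribution):
    nwsnapshot --extra_code app.js app.bin

    # 2. Reference the blob from package.json so node-webkit loads it:
    #    {
    #      "name": "my-app",
    #      "main": "index.html",
    #      "snapshot": "app.bin"
    #    }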
You could also make an exe file.
See "Step 2b: Alternative way - Making an executable file out of a .nw file" from the link you provided.