Could WebAssembly be a way to enforce DRM?

The idea of using compiled languages on the web might be a great way to increase performance significantly.
But could it also be used to enforce DRM?
For example: a website offers browser games and doesn't want the source code to be used by others. Could a WebAssembly script, tied deep into the game mechanics, be used to detect that the game is running on another site and lock it down, with no way to decompile and bypass it?
I don't want to sound like a pirate with this, but it might also concern adblock users who block trackers.
If, for example, an AudioContext fingerprinting script runs in the background without being detected, how can it be blocked?

Other than performance, Wasm does not provide any new capability that the Web doesn't already have. You will be able to see what Wasm modules are loaded, you will be able to see what's being called, and you can inspect and step through Wasm code in text form -- and arguably, Wasm code is no more obscure than e.g. minified asm.js code. Moreover, Wasm is only able to interact with the browser and the web page by calling into JavaScript.
So, no, web sites won't be able to use it to do anything behind your back other than in ways they can use already.
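To make that concrete, here is roughly what the JS boundary looks like when instantiating a module; the file name game.wasm and the imported log function are hypothetical, but the pattern is how any Wasm module gets its capabilities:

// Every capability a Wasm module has is handed to it through the JS
// import object, so the page (or your devtools/extension) can see and
// intercept every call it makes.
const importObject = {
  env: {
    // The module cannot touch the DOM or network directly; it can only
    // call functions the host chooses to expose, which can be logged or blocked.
    log: (value) => console.log("wasm called log with", value),
  },
};

WebAssembly.instantiateStreaming(fetch("game.wasm"), importObject)
  .then(({ instance }) => {
    // The module's exports are equally visible and callable from JS.
    console.log(Object.keys(instance.exports));
  });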

Related

Can I effectively use transpilation, code tokenization/regeneration, a VM or a similar approach to reliably sandbox/control code within a browser?

It looks like I'm asking about a tricky problem that's been explored a lot over the past decades without a clear solution. I've seen Is It Possible to Sandbox JavaScript Running In the Browser? along with a few smaller questions, but all of them seem to be mislabeled - they all focus on sandboxing cookies and DOM access, and not JavaScript itself, which is what I'm trying to do; iframes or web workers don't sound exactly like what I'm looking for.
Architecturally, I'm exploring the pathological extreme: not only do I want full control of what functions get executed, so I can disallow access to arbitrary functions, DOM elements, the network, and so forth, I also really want to have control over execution scheduling so I can prevent evil or poorly-written scripts from consuming 100% CPU.
Here are two approaches I've come up with as I've thought about this. I realize I'm only going to perfectly nail two out of fast, introspected and safe, but I want to get as close to all three as I can.
Idea 1: Put everything inside a VM
While it wouldn't present a JS "front", perhaps the simplest and most architecturally elegant solution to my problem could be a tiny, lightweight virtual machine. Actual performance wouldn't be great, but I'd have full introspection into what's being executed, and I'd be able to run eval inside the VM and not at the JS level, preventing potentially malicious code from ever encountering the browser.
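As a toy illustration of what I mean (the opcodes and program below are invented, nothing like a real ISA): a stack machine whose only operations are ones the host defines can never reach the browser, and a step budget gives me the scheduling control I mentioned above.

// A toy stack machine: guest programs can only use opcodes defined here,
// so they can never reach the DOM, network, or eval.
function run(program, maxSteps) {
  var stack = [];
  for (var pc = 0, steps = 0; pc < program.length; pc++, steps++) {
    // The step budget prevents a hostile program from consuming 100% CPU.
    if (steps > maxSteps) throw new Error("step budget exceeded");
    var op = program[pc];
    if (op[0] === "push") stack.push(op[1]);
    else if (op[0] === "add") stack.push(stack.pop() + stack.pop());
    else throw new Error("unknown opcode " + op[0]);
  }
  return stack.pop();
}

run([["push", 2], ["push", 3], ["add"]], 1000); // => 5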
Idea 2: Transpilation
First of all, I've had a look at Google Caja, but I'm looking for a solution itself written in JS so that the compilation/processing stage can happen in the browser without me needing to download/run anything else.
I'm very curious about the various transpilers (TypeScript, CoffeeScript, this gigantic list, etc.) - if these languages perform a full tokenization -> AST -> code generation pass, that would make them excellent "code firewalls" that could be used to filter function/DOM/etc. accesses at compile time, meaning I get my performance back! A naive sketch of that filtering step is below.
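For instance, using a JS parser like acorn (which also runs in the browser, fitting my no-server-tools requirement) to reject source that mentions dangerous globals before it ever reaches eval. This is a deliberately crude blacklist; it would also flag harmless local variables with these names, so a real firewall would need scope analysis:

// Crude compile-time filter: parse untrusted source and reject any
// occurrence of a banned identifier before the code is ever executed.
// Assumes the acorn and acorn-walk packages are available.
var acorn = require("acorn");
var walk = require("acorn-walk");

var BANNED = new Set(["document", "window", "eval", "XMLHttpRequest", "fetch"]);

function rejectDangerous(source) {
  var ast = acorn.parse(source, { ecmaVersion: 2020 });
  walk.simple(ast, {
    Identifier: function (node) {
      if (BANNED.has(node.name)) {
        throw new Error("forbidden identifier: " + node.name);
      }
    },
  });
  return source; // passed the filter; hand it to the next stage
}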
My main concern with transpilation is whether there are any attacks that could be used to generate the kind of code I'm trying to block. These languages' parsers weren't written with security in mind, after all. This was my motivation for using a VM.
This technique would also mean I lose execution introspection. Perhaps I could run the code inside one or more web workers and "ping" the workers every second or so, killing off workers that [have presumably gotten stuck in an infinite loop and] don't respond. That could work.
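The watchdog part might look something like this (untrusted-worker.js is a hypothetical worker that answers {type: "ping"} with {type: "pong"}):

// Run untrusted code in a Worker and terminate it if it stops answering
// pings - e.g. because it is stuck in an infinite loop.
const worker = new Worker("untrusted-worker.js");
let alive = true;

worker.onmessage = (e) => {
  if (e.data && e.data.type === "pong") alive = true;
};

const watchdog = setInterval(() => {
  if (!alive) {
    clearInterval(watchdog);
    worker.terminate(); // no pong since the last ping: kill it
    return;
  }
  alive = false;
  worker.postMessage({ type: "ping" });
}, 1000);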

Can I whitelist libraries available to user-provided Javascript?

I want my website to be able to handle all of these variations. At first I planned for my website to implement a comprehensive framework that could run all Mafia variants itself. However, after going over a ton of rule sets for finished games archived on several different forums, I realized that the space of reasonable rules and gameplay mechanics is so huge that I would essentially have to create a new domain-specific programming language to allow all possible variants. Inventing a new language for a an otherwise straightforward personal project is rather silly and not something I'm interested in at the moment, especially given I have a perfectly good language at hand, namely JavaScript.
Therefore, I decided to let variant authors upload a JavaScript file containing the variant code that my website will call into at the appropriate points. Essentially, JavaScript modules implementing Mafia variant game logic (which my website code will require()) will act as a scripting language for my web site's "game engine". Think Lua for C++ games. Unfortunately, this introduces a severe security problem. Unlike in the browser, node-run JavaScript has access to the file system, the network, etc., so it would be trivial for a malicious user to upload a variant file that deletes the contents of my hard drive, or starts Bitcoin mining, or whatever.
My first thought was to do a replace() over each user's uploaded code, turning dangerous library names such as 'fs' and 'http' into invalid strings, and to catch the consequent exceptions when I try to load the file. However, this ad-hoc blacklisting technique feels like the kind of approach that one of the many people smarter/more knowledgeable than me would be able to overcome in a heartbeat. What I really need is a way to whitelist the libraries that user-uploaded code has access to. Is there a way to do this using JavaScript in node.js? If not, how would you recommend I secure the computer my node server will be running on as much as possible?
My current strategy is to require myself and a small number of trusted users to review and then vote unanimously in favor of user-uploaded JavaScript variant code before it is brought into the system, but I'm hoping there is a more automatic way of doing it.
You need to use the vm module for this. Basically, it allows you to run scripts in customized contexts, so you can put in whatever globals you want, define your own require, etc.
You should also remember that in node.js it's possible to harm your app without any libraries - a user can simply add something like while (true) {}, which will stall the whole process. So you need to run all untrusted code in separate processes and be ready to kill them when they start to abuse CPU or memory.
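A minimal sketch of both points (variant.js and the whitelist contents are made up; note that vm alone is not a hardened security boundary, which is exactly why the separate-process advice above still applies):

// Run an uploaded variant in a vm context that only exposes
// whitelisted globals and a replacement require.
const vm = require("vm");
const fs = require("fs");

const ALLOWED = { path: require("path") }; // whitelisted modules only

const sandbox = {
  console,
  // A replacement require that refuses anything not on the whitelist.
  require: (name) => {
    if (!(name in ALLOWED)) throw new Error(`module "${name}" is not allowed`);
    return ALLOWED[name];
  },
};

const code = fs.readFileSync("variant.js", "utf8");
vm.createContext(sandbox);
// The timeout option also guards against a plain while (true) {}.
vm.runInContext(code, sandbox, { timeout: 1000 });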

Is minified JavaScript the only way to protect the source code?

I am doing some R&D to define all the technologies involved in the development of a multi-tier application that has an HTML5 browser frontend.
I plan to write the whole client in HTML5/CSS/JS. Since there is a middle tier, my "real" code stays safely on the server side; still, for various reasons there could be a need to hide the JavaScript in my web pages.
Minifying it is a way to make it less readable, but is there a simple way to "hide sources"?
The JS files will typically be on a web farm, but in some cases there will be an enterprise installation, and this is why I am investigating a way to "hide the code".
Thanks.
No, you cannot hide JS source code. If someone wants to take a peek at your JS source, they will be able to.
Minification + Obfuscation are things you can do however. Note that these techniques don't protect your source, they only make it difficult to read through your source.
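Purely for illustration, here is what that difference in readability looks like; the "output" line is a made-up example of what a minifier/obfuscator might emit, not the output of any particular tool:

// Original source:
function calculateTotalPrice(items, taxRate) {
  return items.reduce((sum, item) => sum + item.price, 0) * (1 + taxRate);
}

// Hypothetical minified + obfuscated output - same behavior, no names:
// function a(b,c){return b.reduce(function(d,e){return d+e.price},0)*(1+c)}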
You cannot hide your JS. It will always be visible as long as it is downloaded from your server.
You can use some techniques to make it harder for someone to read your code. These include:
Minify and obfuscate (like you pointed out). Using the aggressive mode in Google Closure makes the code pretty hard to read.
Interleave client-side code with server-side code.
Fetch part of your client-side code with AJAX.
This only makes "reverse engineering" harder, not impossible, especially if someone is patient enough to follow the breadcrumbs.
UPDATE:
There is an alternative way to "hide" your code. In browsers that support extensions, you can develop an extension with most of your app's core functionality and use JS to interact with it. Most browsers support extensions with external DLLs. However, this will "force" your users to download your extension to use your webapp, which might not be a good idea.
Minifying JavaScript does not in any way hide it, any developer would be able to reformat it in seconds.
By definition, there is no way to hide your JavaScript. Any code that is available for the browser to execute is available for the user to read.

Watir and Javascript

I am looking into multiple web testing tools. I am trying Watir now. My main concern is dealing with JavaScript. I just want to know if anyone can give me an overview of dealing with JavaScript in Watir. What are some of the pitfalls and difficulties with it? Is it basically using JavaScript injection to tell the page what to do?
And if someone wants to suggest other web testing tools like Watir, I would appreciate it.
I tried selenium first and found it to be a tad unreliable.
Are there any cheap tools on the market?
Thank You!
Watir + JavaScript => generally it's possible to inject JavaScript into your tests, e.g.
b.goto("javascript:openWin(2)")
When you say 'dealing with JavaScript' I assume you mean how well Watir handles client-side code in terms of rendering/execution. Since Watir drives real browsers (like Selenium does), JS will generally execute as expected.
Watir has many different drivers, e.g. watir, firewatir, safariwatir, chromewatir, operawatir and now watir-webdriver. All of them drive the browser with slightly different implementations depending on the browser and OS. Firewatir, for example, uses JSSH, which in effect controls the browser via JS. Can you explain what you mean by Selenium being 'unreliable'?
I'd recommend looking at the latest implementation of watir-webdriver. That way you get the benefit of a nice watir API on top of a new driver implementation. Webdriver has some strong backing in terms of support (Selenium 2 uses it, Google is coding it!) so I reckon it's a safe bet. You can also control most of the major browsers with this implementation.
Alternative tools => http://wiki.openqa.org/display/WTR/Alternative+Tools+For+Web+Testing
Tim provides a pretty good answer.
The only thing I have to add is that now and then I have to use the Watir methods to fire specific JavaScript events, such as onmouseover, in order to accurately simulate the user interacting with the page. Since Watir has a method for this, the hard part is not the Watir code, but reverse engineering the page (or noticing subtle page interactions based on user actions) to figure out what elements are 'wired' up to what events, and the order in which to fire those events against those specific elements.
Usually it's pretty easy to look at the HTML for an element and see what's going on. But some custom controls can take a bit of learning because they do a pretty good job of 'hiding' all the event wiring, and you may have to dig through various aspects of the page (styles and all) using something like Fiddler.
(After all, the normal user will never 'force' JavaScript to execute or 'inject' JavaScript. They will use the mouse and keyboard to interact with the page, and any JavaScript is going to be the result of scripts that execute when the page is loaded, or of scripts triggered by events based on specific user actions.)
If your JS does not trigger an HTML refresh then WATiR will get confused, because when you click an object in WATiR it waits for the page to load before continuing. You can overcome this with custom waiting commands and use of '.click!'.
If you are a reasonable ruby coder then WATiR is a solution for most things. It has the potential to be a rather stable and reliable source of automated web testing.
You may want to look into Firewatir, Sahi and watir-webdriver, just to give you some more leads (I would suggest googling for "open source web testing" and the like if you haven't). I looked into these and many more and settled on WATiR for reasons of cost, power, flexibility and prior knowledge (in Ruby and WATiR). With the right gems it will speak to most databases and to Excel (or other files) to load test data.
I'm currently using WATiR to test a ZK-generated interface where none of the IDs are ever static and there's a lot of AJAXiness going on. I just built a framework to deal with these, and it works just fine.
Also, a tip that may help:
To pass JavaScript from Watir, use browser.execute_script().
Example:
Watir::Wait.until { $browser.execute_script("return document.readyState") == "complete" }

Are there command line or library tools for rendering webpages that use JavaScript?

Page-scraping on the Internet seems to have hit somewhat of a wall for me, as there are more and more sites that depend on JavaScript to render portions of the screen.
It seems to me that with so many open-source layout engines and JavaScript engines released (like WebKit, Gecko and Chromium + V8), someone must have made a tool for downloading a page and rendering its JavaScript without having to run an actual browser. However, my searches aren't turning up what I'm looking for - I've found tools like Selenium RC, but they depend on a running browser. I'm interested in any tool or library which can do one (or both) of the following:
1. A program that can be run from the command line (*nix) which, given the source of a page, returns the page's source as rendered by some JS engine.
2. Integrated support in a particular language that allows one to (easily) pass the source of a page to it and have it return the page's source as rendered by some JS engine.
I think #1 is preferable in a general sense, but #2 would be more useful if the tool exists in the language I want to work in. Also, I'm not concerned with the particular JS engine - any relatively modern one will do. What is out there?
wkhtmltopdf (WebKit HTML to PDF) works perfectly; it can even produce JPGs.
http://wkhtmltopdf.googlecode.com
You can look at HtmlUnit. Its main purpose is automated web testing, but I think it may let you get the rendered page.
Well, there's the DumpRenderTree tool which is used as part of the WebKit test suites. I'm not sure how suitable it is for turning into a standalone tool, but it does what you ask for (render HTML, run JavaScript, and dump its render tree out to disk).
Since JavaScript can do quite a lot of manipulations to the web page's document object model (DOM), it seems like to accurately scrape the content of an arbitrary page, you'd need to not only run a JavaScript engine, you'd also need a complete and accurate DOM representation of the page. That's something you'll only get if you have a real browser engine instantiated. It is possible to use an embedded, not-displayed WebKit or Gecko engine for this, then after a suitable loading delay to allow for script execution, just dump the DOM contents in HTML form.
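The final "dump" step is just a snippet you inject into the embedded engine once loading has settled; the 2000 ms delay below is an arbitrary guess, and how you hand the string back to the host application depends on the engine's bridge:

// Wait for page scripts to run, then serialize the live DOM back to HTML.
setTimeout(function () {
  var html = document.documentElement.outerHTML;
  // Replace console.log with whatever callback your embedding exposes.
  console.log(html);
}, 2000);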
We used Rhino some time ago to do some automated testing from Java. It seems it'll do the job for you :)
I think there's example code for Qt that uses the included WebKit to render a page to a pixmap; from there to a full CLI utility is just a matter of defining your needs.
Of course, for most screen-scraping needs you want the text, not a pixmap... if that's what you want, better check Rhino.
There is the Cobra Engine for Java (http://lobobrowser.org/cobra.jsp), which handles Javascript (it also has a renderer, but that is optional). I've never used it, but have heard nice things said about it.
It takes very little code to have a WebView render a page without displaying anything, but it has to be a GUI application. It can take command-line arguments as well and hide its window; using WebKit directly, it might be possible as a pure command-line tool.
Apart from the complicated DOM access in Objective-C, WebKit can also inject JavaScript, and together with jQuery that makes for a nice scraping solution. I don't know of any universal application doing that, though.
