I found this gist implementing a sandbox for third-party code using `with` and the Harmony direct proxies. How useful is it? Would it be possible to implement a proper JavaScript sandbox using proxies? What are the quirks and/or downsides of this approach?
(I'm looking for a JavaScript-only solution in this question, so no Caja and similar server-side projects.)
In principle, that approach should probably work. However, a couple of things to note:
Clearly, this requires putting all untrusted code into the with-scope. In practice, that might become rather unwieldy.
Moreover, it subtly changes the meaning of outermost var/function declarations contained in that code, which now become local instead of being properties on the global object. Undeclared variables, on the other hand, will still end up on the global object. This may break some programs.
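To make the mechanics concrete, here is a minimal sketch of the `with` + Proxy approach (the function and variable names are illustrative, not taken from the gist):

```javascript
// The Proxy's `has` trap claims every identifier, so lookups inside
// the with-block never escape to the real global object.
function sandboxedEval(code, whitelist) {
  const scope = new Proxy(Object.create(null), {
    has: () => true, // intercept *all* identifier lookups
    get: (target, key) =>
      key === Symbol.unscopables ? undefined : whitelist[key],
  });
  // Note: a `var x = 1` inside `code` becomes local to this function
  // rather than a property of the real global object.
  return new Function('scope', 'with (scope) { return (' + code + '); }')(scope);
}

sandboxedEval('x + y', { x: 2, y: 3 }); // 5
sandboxedEval('typeof console', {});    // 'undefined' — the real global is hidden
```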
Because of the insane semantics of 'with', modern JavaScript VMs give up most attempts to optimise code in its scope. Generated code can easily be two orders of magnitude slower for something that has a 'with'.
So overall, I wouldn't recommend this approach. You are far better off with SES or Caja (not sure in which sense you call those server-side).
(It's also worth noting that ES6's module loaders will provide a cleaner way to sandbox the global object. But it is hard to tell when those will become available. Not soon.)
Related
I understand at a high level why one would not want to allow arbitrary code to execute in a web browser via the JS eval() function.
But I wonder if there are any practical approaches to preventing attacks by parsing the code that is passed to eval() to check that it is safe. For example:
disallowing any flow-control constructs, e.g. for, while. (Should stop infinite loops)
disallowing any variable names / function calls that don't match a whitelist. (Should stop any access to the DOM, built-in APIs, or malicious functions)
If you don't think this can be done safely, could you describe the predicted pitfalls? It's valuable to me if somebody says "this isn't practical because X" rather than just some blanket statement. Trust me - if I can't convince myself with certainty that it can be done safely, I won't do it.
I know that I can write my own expression evaluator or use a 3rd-party library that does the same. And I may do that. But I remain interested in using eval() because of certain advantages: native implementation performance and language consistency.
Yes, in general this is possible - basically you develop your own programming language that you know does only safe operations, you write your own parser for it, you write your own interpreter for it, and then you optimise that interpreter into a compiler targeting JavaScript that runs the result through eval.
However, using JavaScript as the base language and then stripping away unsafe parts, or even whitelisting some things, is not a good approach. The "whitelisting" would need to be so sophisticated that developing your own language is generally simpler. The two example restrictions you've presented in your question fail to reach their goals: to prevent infinite execution you also need to prevent recursion, and to prevent access to builtins you more or less also need to prevent dynamic property access. A lot of work has been done to define a provably "safe" subset of ECMAScript that one could fearlessly evaluate, and believe me, it is far from trivial.
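To illustrate why those two example restrictions fail, consider this hypothetical attacker payload, which uses only string literals, property access, and calls — no blacklisted identifier names at all:

```javascript
// Dynamic property access climbs from a harmless literal back to the
// Function constructor, and from there to the real global object:
var payload = '""["constructor"]["constructor"]("return globalThis")()';
eval(payload) === globalThis; // true — the sandbox is escaped

// And a ban on `for`/`while` does not stop infinite execution:
// eval('(function f(){ f() })()'); // recursion — hangs or blows the stack
```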
So no, this is not a practical approach.
So, fundamentally, this question is not opinion-based. I am seriously pursuing this issue objectively, setting aside the predominant opinion expressed in Why is extending native objects a bad practice?
and it is related to these unanswered questions:
If Object.prototype is extended with Symbol property, is it possible to break code without specifically designed APIs to access Symbol in JavaScript?
Should the extension of built-in Javascript prototypes through symbols also be avoided?
The first question has already been closed as opinion-based, and as you may know, once a question in this community is closed, moderators will rarely bother to re-open it, however much we modify it. That is the way things work here.
As for the second question, for some reason it has been taken more seriously and not considered opinion-based, although the context is identical.
There are two parts to the "don't modify something you don't own" rule:
You can cause name collisions and you can break their code. By touching something you don't own, you may accidentally overwrite something used by some other library. This will break their code in unexpected ways.
You can create tight dependencies and they can break your code. By binding your code so tightly to some other object, if they make some significant change (like removing or renaming the class, for example), your code might suddenly break.
Using symbols will avoid #1, but you still run into #2. Tight dependencies between classes like that are generally discouraged. If the other class is ever frozen, your code will still break. The answers on this question still apply, just for slightly different reasons.
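To make both points concrete, here is a quick sketch (the symbol names are illustrative):

```javascript
const a = Symbol('size');
const b = Symbol('size'); // same description, yet a distinct key
const obj = {};
obj[a] = 1;
obj[b] = 2; // no collision, unlike string keys
console.log(obj[a], obj[b]); // 1 2

// But #2 remains: if anyone hardens the environment, the extension breaks.
(function () {
  'use strict';
  Object.freeze(Object.prototype);
  try {
    Object.prototype[a] = function () {};
  } catch (e) {
    console.log(e instanceof TypeError); // true — extending now throws
  }
})();
```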
Also, I've read opinions (though how can we discuss such a thing here without any "opinion" at all?) claiming that:
a) library code using Symbols exists, and it may tweak the Symbol API (such as Object.getOwnPropertySymbols());
b) extending an object with a Symbol-keyed property is fundamentally no different from extending it with a non-Symbol property.
Here, the major rationale for not touching Object.prototype is #1; almost all the answers I saw claimed that, and there would be nothing to discuss if Symbols were not involved.
However, using Symbols avoids #1, as they say, so most of the traditional wisdom no longer applies.
Then, as #2 says,
By binding your code so tightly to some other object, if they make some significant change (like removing or renaming the class, for example), your code might suddenly break.
Well, in principle, any fundamental API upgrade can break any code. That well-known fact has nothing to do with this specific question, so #2 does not answer it.
The only part worth considering is that Object.freeze(Object.prototype) could be a remaining problem. However, that is essentially the same situation: someone else unexpectedly upgrading the basic API.
From the perspective of API users, not API providers, the expected state of Object.prototype is not frozen.
If someone else touches the basic API and freezes it, it is they who broke the code: they upgraded the basic API without notice.
For instance, in Haskell, there are many language extensions. Presumably they solve the collision issue well, and most importantly, they won't "freeze" the basic API, because freezing the basic API would break their ecosystem.
Therefore, I observe that Object.freeze(Object.prototype) is the anti-pattern; it cannot be taken for granted as a justification for preventing Object.prototype extension with Symbols.
So here is my question. Given this observation, is it safe to say:
As long as Object.freeze(Object.prototype) is not performed (which is an anti-pattern and is detectable), it is safe to extend Object.prototype with Symbols?
If you don't think so, please provide a concrete example.
Is it safe? Yes, if you are aware of all the hazards that come with it, and either choose to ignore them, shrug them off, or invest effort to ensure that they don't occur (testing, clear documentation of compatibility requirements), then it is safe. Feel free to do it in your own code where you can guarantee these things.
Is it a good idea? Still no. Don't introduce this in code that other people will (have to) work with.
If someone else touches the basic API and freezes it, it is they who broke the code. Therefore, I observe that Object.freeze(Object.prototype) is the anti-pattern.
Not quite. If you both did something you shouldn't have done, you're both to blame - even if doing only one of these things gets away with working code. This is exactly what point #2 is about: don't couple your code tightly to global objects that are shared with others.
However, the difference between those things is that freezing the prototype is an established practice to harden an application against prototype pollution attacks and generally works well (except for one bit),
whereas extending the prototype with your own methods is widely discouraged as a bad practice (as you already found out).
In Haskell, there are many language extensions. Presumably they solve the collision issue well, and most importantly, they won't "freeze" the basic API, because freezing the basic API would break their ecosystem.
Haskell doesn't have any global, shared, mutable object, so the whole problem is a bit different. The only collision issue is between identifiers from "star-imported" modules, including the prelude from the base API. However, this is per module, not global, so it doesn't break composability as you can resolve the same identifier to different functions in separate modules.
Also yes, their base API is frozen and versioned, so they can evolve it without breaking old applications (which can continue using old dependencies and old compilers). This is a luxury that JavaScript doesn't have.
Is it safe to extend Object.prototype with a pipe symbol so that something[pipe](f) does f(something), like something |> f in F# or the previous proposal of pipe-operator?
No, it's not safe, not for arbitrary values of something. Some obvious values where this doesn't work are null and undefined.
However, it doesn't even work for all objects: there are objects that don't have Object.prototype on their prototype chain. One example is Object.create(null) (also done for security purposes); another is objects from other realms (e.g. iframes). This is also the reason why you shouldn't expect .toString() to work on all objects.
So for your pipe operator, better use a static standalone method, or just use a transpiler to get the syntax you actually want. An Object.prototype method is only a bad approximation.
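The failure cases are easy to demonstrate (the `pipe` symbol here is illustrative):

```javascript
const pipe = Symbol('pipe');
Object.prototype[pipe] = function (f) { return f(this); };

const double = (x) => x * 2;
console.log((21)[pipe](double)); // 42 — primitives get boxed

// null and undefined have no prototype chain at all:
// null[pipe](double); // TypeError

// Objects without Object.prototype on their chain miss the method too:
const bare = Object.create(null);
// bare[pipe](double); // TypeError: bare[pipe] is not a function
```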
Extending the Object prototype is a dangerous practice.
You have obviously done some research and found that the community of JavaScript developers overwhelmingly considers it to be a very bad practice.
If you're working on a personal project all by yourself, and you think the rest of us are all cowards for being unwilling to take the risk, then by all means: go ahead and modify your Object prototype! Nobody can stop you. It's up to you whether you will be guided by our advice. Sometimes the conventional wisdom is wrong. (Spoiler: in this case, the conventional wisdom is right.)
But if you are working in a shared repository, especially in any kind of professional setting, do not modify the Object prototype. Whatever you want to accomplish by this technique, there will be alternative approaches that avoid the dangers of modifying the base prototypes.
The number one job of code is to be understood by other developers (including yourself in the future), not just to work. Even if you manage to make this work, it is counterintuitive, and nobody who comes after you will expect to find this. That makes it unacceptable by definition, because what matters here is reasonable expectations, NOT what the language supports. Any person who fails to recognize that professional software development is a team effort has no place writing software professionally.
You are not going to find a technical limitation to extending the Object prototype. Javascript is a very flexible language -- it will give you plenty of rope with which to hang yourself. That does not mean it's a good idea to place your head in the noose.
I have some procedural JavaScript code that I have written for an open-source application, and I'd like to refactor it into OOP. Since I have very little experience with JavaScript frameworks, I have trouble finding one suitable for my needs. I haven't tried anything yet; I have just read about AngularJS, Backbone.js, and Knockout.
I want to structure the code because, at the moment, it's a mess, with global variables and functions scattered around.
I have to mention that all the business logic is handled at the server level, so the client-side code handles just the UI using the data it receives or requests from the server.
The code can be found here:
https://github.com/paullik/webchat/blob/asp.net/webchat/Static/Js/chat.js
Do you have any suggestions?
Object-Oriented JavaScript is not necessarily the answer to all your problems.
My advice is to be careful about the choice you make on this topic.
In practice, OO-JS can add more complexity to your code for the sake of trying to be more like traditional object-oriented languages. As you probably know, JS is unique.
It is important to know that there are Design Patterns that will structure your code and keep implementation light and flexible.
It is Design Patterns that I see structuring advanced JS implementations, not OO. To paraphrase Axel Rauschmayer: "Object-oriented methodology does not fit into basic JavaScript syntax; it is a twisted and contorted implementation, and JS is far more expressive without it."
The root of this analysis boils down to the fact that JS has no class. In essence, since everything is an object, you already have object-oriented variables and functions. Thus the problem is slightly different than the one found in compiled languages (C/Java).
What Design Patterns are there for JavaScript?
An excellent resource to check is Addy Osmani's Essential JavaScript Design Patterns.
He wrote this book on Design Patterns in JavaScript.
But there is more... much more.
A. require.js - There is a way to load modules of JS code in a very impressive way.
These are generally called module loaders, and they are widely considered the future of loading JS files since they optimize performance at runtime. yepnope and others exist. Keep this in mind if you are loading more than a few JS files. (Moved to top by request.)
B. MVC - There are dozens of Model View Controller frameworks to help you structure code.
It is a Pattern, but may be unreasonable for your purposes. You mentioned Backbone, Knockout, and Angular... Yes, these can do the trick for you, but I would be concerned that they might 1) have a high learning curve, and 2) be overkill for your environment.
C. Namespace or Module Pattern - These are probably the most important for your needs.
To solve the global-variables problem, just wrap them in a namespace and reference that.
These are very good patterns that give rise to module loaders.
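As a sketch of what that looks like (the `Chat` namespace and its members are illustrative names, not taken from your code):

```javascript
// One global namespace object instead of many scattered globals.
var Chat = Chat || {};

// Module pattern: an IIFE keeps helpers private and exposes an API.
Chat.messages = (function () {
  var history = []; // private, not visible outside the closure

  function format(user, text) { // private helper
    return user + ': ' + text;
  }

  return {
    add: function (user, text) {
      history.push(format(user, text));
    },
    all: function () {
      return history.slice(); // hand out a copy, not the original
    },
  };
})();

Chat.messages.add('alice', 'hi');
console.log(Chat.messages.all()); // ['alice: hi']
```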
D. Closure - You mentioned OO JS. One piece of this that is useful is the notion of closures, which give you... private members. At first this is a cryptic idea, but once you recognize the pattern, it is trivial in practice.
E. Custom Events - It becomes very important not to use hard references between objects (example: AnotherObject.member), because they tightly couple the two objects and make both of them inflexible to change. To solve this, trigger and listen to events. In traditional Design Patterns this is the Observer; in JS it is called PubSub.
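A tiny PubSub sketch, with illustrative names:

```javascript
var PubSub = {
  topics: {},
  subscribe: function (topic, handler) {
    (this.topics[topic] = this.topics[topic] || []).push(handler);
  },
  publish: function (topic, data) {
    (this.topics[topic] || []).forEach(function (handler) {
      handler(data);
    });
  },
};

// The chat UI reacts to new messages without holding a direct
// reference to whichever object produced them.
PubSub.subscribe('message:received', function (msg) {
  console.log('render', msg);
});
PubSub.publish('message:received', { user: 'bob', text: 'hello' });
```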
F. The Callback - The callback pattern is what enabled AJAX, and it is revolutionizing development in terms of Windows 8, Firefox OS, and Node.js because of something called non-blocking I/O. Very important.
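A minimal Node-style callback sketch (`fetchUser` is an invented stand-in for real asynchronous I/O):

```javascript
// The function returns immediately and invokes `callback` later,
// so the thread is never blocked waiting on the result.
function fetchUser(id, callback) {
  setTimeout(function () { // stand-in for a real async request
    callback(null, { id: id, name: 'alice' });
  }, 10);
}

fetchUser(42, function (err, user) {
  if (err) throw err;
  console.log(user.name); // 'alice'
});
```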
Do not be afraid. This is the direction to go for long-term and advanced JavaScript implementations.
Once you recognize the patterns, it is downhill from there.
Hope this helps.
I am new to CoffeeScript and am trying to get a feel for the best way of managing and building a complex application that will run in the browser. So, I am trying to determine what is the best way to structure my code and build it; with consideration for scope, testing, extensibility, clarity and performance issues.
One simple solution suggested here (https://github.com/jashkenas/coffee-script/wiki/%5BHowTo%5D-Compiling-and-Setting-Up-Build-Tools) seems to be to maintain all your files/classes separately, then use a Cakefile to concatenate all your files into a single coffee file and compile that. It seems like this would work, in terms of making sure everything ends up in the same scope. It also seems like it makes deployment simple. And it can be automated, which is nice. But it doesn't feel like the most elegant or extensible solution.
Another approach seems to be this functional approach to generating namespaces (https://github.com/jashkenas/coffee-script/wiki/Easy-modules-with-CoffeeScript). This seems like a clever solution. I tested it and it works, but I wonder if there are performance or other drawbacks. It also seems like it could be combined with the above approach.
Another option seems to be assigning/exporting classes and functions to the window object. It sounds like that is a fairly standard approach, but I'm curious if this is really the best approach.
I tried using Spine, as it seems like it can address these issues, but was running into issues getting it up and running (couldn't install spine.app or hem), and I suspect it uses one or more of the above techniques anyways. I would be interested if javascriptMVC or backbone solves these issues - I would consider them as well.
Thanks for your thoughts.
Another option seems to be assigning/exporting classes and functions to the window object. It sounds like that is a fairly standard approach, but I'm curious if this is really the best approach.
I'd say it is. Looking at that wiki page's history, the guy advocating the concatenation of .coffee files before compilation was Stan Angeloff, way back in August 2010, before tools like Sprockets 2 (part of Rails 3.1) became available. It's definitely not the standard approach.
You don't want multiple .coffee files to share the same scope. That goes against the design of the language, which wraps each file in a scope wrapper for a reason. Having to attach globals to window to make them global saves you from making one of the most common mistakes JavaScripters run into.
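For reference, each compiled .coffee file ends up wrapped roughly like this (a sketch of the emitted JavaScript; `App` and `greet` are illustrative names):

```javascript
// CoffeeScript wraps every compiled file in an IIFE, so top-level
// variables stay file-local unless you export them explicitly:
(function () {
  var secret = 'local'; // invisible to other compiled files
  var greet = function (name) { return 'hi ' + name; };

  // explicit global export (`window` in a browser):
  globalThis.App = { greet: greet };
})();

globalThis.App.greet('bob'); // 'hi bob'
console.log(typeof secret);  // 'undefined' outside the wrapper
```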
Yes, helper duplication can cause some inefficiency, and there's an open discussion on an option to disable helper output during compilation. But the impact is minor, because helpers are unlikely to account for more than a small fraction of your code.
I would be interested if javascriptMVC or backbone solves these issues
JavaScript MVC and Backbone don't do anything with respect to scoping issues, except insofar as they cause you to store data in global objects rather than as local variables. I'm not sure what you mean when you say that Spine "seems like it can address these issues"; I'd appreciate it if you'd post another, more specific question.
In case you would prefer the node.js module system, this gives you the same in the browser: https://github.com/substack/node-browserify
File foo.coffee:
module.exports = class Foo
...
File bar.coffee:
Foo = require './foo'
# do stuff
YUI Compressor was the consensus best tool for minimizing, but Closure seems like it could be better.
"Whichever you find best for you" is, I think, the general answer at the moment. YUI has been available longer, so it is naturally the one that currently has the consensus as the best tool, whereas Closure is new to us, so there isn't the wealth of experience with it that there is with YUI. Hence I don't think you'd find compelling real-world arguments for using Closure based on people's experience with it, simply because it's new.
That's not to say you shouldn't use Closure... just my roundabout way of saying that I don't think there's an answer to this until a number of people have used the two and compared them.
Edit:
There are a couple of early comparisons, saying Closure does give an improvement:
http://blog.feedly.com/2009/11/06/google-closure-vs-yui-min/
http://news.ycombinator.com/item?id=924426
Further Edit:
Worth keeping an eye on issue list for Closure: http://code.google.com/p/closure-compiler/issues/list
From the comparisons I've seen, Closure seems to be the clear winner in terms of minimizing file size. This article uses three popular JS libraries (jQuery, Prototype, MooTools) to compare compression between YUI Compressor and Closure Compiler:
http://www.bloggingdeveloper.com/post/Closure-Compiler-vs-YUI-Compressor-Comparing-the-Javascript-Compression-Tools.aspx
Closure comes out in front in each test, particularly in its advanced mode, where it "minimizes code size about 20-25% more than YUI Compressor by providing nearly 60% compression."
Closure can be used in the Simple mode or the Advanced mode. Simple mode is fairly safe for most JavaScript code, as it only renames local variables in functions to get further compression.
Advanced mode is much more aggressive. It will rename keys in object literals, and inline function calls if it can determine that they return simple values with no side effects.
For example:
function Foo()
{
return "hello";
}
alert(Foo());
is translated to:
alert("hello");
And this code:
var o = {First: "Mike", Last: "Koss"};
alert(o);
is translated to:
alert({a:"Mike",b:"Koss"});
You can prevent the Advanced mode from changing key values in object literals by quoting the names like this:
{'First': "Mike", 'Last': "Koss"}
You can try out these and other examples at google's interactive Closure Compiler site.
Looks like jQuery 1.5 just moved to UglifyJS:
Additionally with this switch we've moved to using UglifyJS from the Google Closure Compiler. We've seen some solid file size improvements while using it so we're quite pleased with the switch.
I think it depends on your code. If you want to compile your own code, then I think it is worth patching the code so that it works with Closure Compiler (some things might seem a bit awkward at the start). I believe Closure Compiler will soon be the top choice for such jobs, and it will also make you tidy up your code a bit and maintain a consistent style (of course it depends on your preferences; you might hate some parts, I do :P ).
If you depend on other libraries then in my opinion you should wait a bit until they release Closure Compiler compatible versions. It shouldn't take much time for most popular libraries out there. And maybe you can provide fixes for those "not-so-active" libraries which you use yourself.
I'm talking about the Advanced Compilation mode here; the Simple Compilation mode, as some have pointed out, is fairly safe to use.
And here's a different opinion - Google Closure? I'm Not Impressed. It's maybe a little too harsh, but a nice read. I guess only time will tell which one is better =)
As of October 2012, it looks like YUI Compressor is now deprecated, or at least no longer going to be used in YUI: http://www.yuiblog.com/blog/2012/10/16/state-of-yui-compressor/
You can make some tests here, and see what is better in each browser:
http://jsperf.com/closure-vs-yui