Is there a risk involved in using MDN bind polyfill? - javascript

I have developed a JavaScript library which requires the bind method.
Unfortunately, bind is not supported by IE8.
There is a polyfill on the MDN website which works well.
My question is:
Are there problems or possible incompatibilities between this polyfill and other JavaScript libraries?
Is it safe to use in any case?

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind#Compatibility
For me the most obvious differences from native bind are:
arguments.caller does not point to the caller of the bound function, but you shouldn't use it anyway
the length of the bound function is set to 0, which may affect function arity checks like https://github.com/fitzgen/wu.js/blob/master/lib/wu.js#L406 (see the sketch below)
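For illustration, here is a minimal sketch of the arity difference, assuming the MDN polyfill has been installed as Function.prototype.bind:

function add(a, b) {
    return a + b;
}

var addOne = add.bind(null, 1); // partially apply the first argument

// Native bind: addOne.length === 1 (original arity minus bound arguments).
// MDN polyfill: addOne.length === 0, which can confuse arity-based dispatch.
console.log(addOne.length);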
IMHO if you are using only "the good parts" of JavaScript, and not developing the core of some framework (for IE8?), you shouldn't face any problems with this polyfill.

The answer is pretty much there on the MDN page itself:
"If you choose to use this partial implementation, you must not rely on those cases where behavior deviates from ECMA-262, 5th edition! With some care, however (and perhaps with additional modification to suit specific needs), this partial implementation may be a reasonable bridge to the time when bind() is widely implemented according to the specification."
There's nothing wrong with the MDN shim as such. However, if you choose to use their shim, make sure that it can't be overridden by other libraries. I had an issue a while ago with Strophe doing just that and replacing one shim with another.
I tend to use Underscore to cover stuff like this, but there are other options like es5-shim. With Underscore you have a method called (you guessed it) _.bind, which works slightly differently from MDN's shim (in how it handles 'new' invocation). Underscore also has a great method called _.partial, which can be useful in scenarios where you don't want to change the value of 'this' but do want to partially apply arguments.
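For example, a quick sketch of _.partial (assuming Underscore is loaded; greet is just an illustrative function):

var greet = function (greeting, name) {
    return greeting + ', ' + name;
};

// Pre-fill the first argument; the value of `this` is left untouched.
var sayHello = _.partial(greet, 'Hello');
sayHello('world'); // "Hello, world"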
The point I am making here is that instead of shimming, maybe look at something that is properly protected/encapsulated within a library. The chances are you're going to need more than one shim in any case if you're targeting browsers like IE8.
Lastly, and less importantly, check out the performance tests at:
http://jsperf.com/browser-vs-es5-shim-vs-mdn-shim

Related

Is there any difference in efficiency when using a native method like .join() instead of writing the logic behind it ourselves?

If I write the code behind a native method's layer of abstraction myself, instead of using that method (which behind the scenes does the same thing I would write manually), does that have a noticeable impact on the app's performance or speed?
There are (at least) three significant advantages of using a built-in function instead of implementing it yourself:
(1) Speed - built-in functions generally invoke lower-level code provided by the browser (not written in JavaScript), and that code often runs significantly faster than the equivalent JavaScript would. Polyfills are correspondingly slower than native code.
(2) Readability - if another reader of your code sees ['foo', 'bar'].join(' '), they'll immediately know what it does, and how the join method works. On the other hand, if they see something like doJoin(['foo', 'bar'], ' '), where doJoin is your own implementation of the same method, they'll have to look up the doJoin method to be sure of what's happening there.
(3) Accuracy - what if you make a mistake while writing your implementation, but the mistake is not immediately obvious? That could be a problem. In contrast, the built-in methods almost never have bugs (and, when spotted, usually get fixed).
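To illustrate the accuracy point, here is a hypothetical hand-rolled join with a subtle edge-case bug (doJoin is an illustrative name, not a real API):

function doJoin(arr, sep) {
    var out = '';
    for (var i = 0; i < arr.length; i++) {
        out += arr[i] + sep; // bug: appends a trailing separator after the last element
    }
    return out;
}

doJoin(['foo', 'bar'], ' '); // "foo bar " with a trailing space, unlike Array.prototype.join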
One could also argue that there's no point spending effort on a solved problem.
Yes, there is a difference in efficiency in some cases. There are some examples in the docs for the library fast.js. To summarize what they say: you don't have to handle all of the cases laid out in the spec, so sometimes you can do some things faster than the built-in implementations.
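For instance, a bare-bones map in the spirit of fast.js can skip the spec's extra work (sparse-array hole checks, ToObject coercion, thisArg handling). This is only a sketch, and fastMap is an illustrative name:

function fastMap(arr, fn) {
    var result = new Array(arr.length);
    for (var i = 0; i < arr.length; i++) {
        result[i] = fn(arr[i], i, arr); // assumes a dense array and no thisArg
    }
    return result;
}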
I wouldn't take this as a license to make your code harder to read/maintain/reuse based on premature optimization, but yes you may gain some speed with your own implementation of a native method depending on your use case.

Why not add Array.prototype methods onto NodeList.prototype in 2017?

Why (if at all) should I avoid doing this:
if (NodeList.prototype.map === undefined) {
    NodeList.prototype.map = Array.prototype.map;
}
or anything else from here (I am not referring to []., I know it's slow)
As far as I know, this behavior was popular in the Prototype 1.0 and MooTools era, and was used with reckless abandon, only to be quickly deprecated because of inconsistencies between browsers.
Seems to work fine now, especially for such a conservative use. Should we still steer clear?
Because, in the event that NodeList ever gets a standard map method, it would conflict with yours.
Even if you check whether a current method exists, for example by checking to see if the prototype holds undefined before assigning a replacement, the people who write web standards have to make sure that their changes don't break our (meaning web authors') working code. Backwards compatibility is one of the core principles of the open web; we trust that if we write code today and leave it for a week or even a few years, it should still work.
Prototype and Mootools became problematic for this very reason; they were incredibly popular and they didn't hesitate to modify the prototypes of built-ins as they saw fit.
Ever wonder why .contains() is instead named the less familiar .includes() on both strings and arrays? That was because MooTools had already defined .contains() with different semantics, and shipping the new standard method broke a significant number of existing web pages, so it was renamed.
TL;DR: It probably doesn't matter for any single page, but it can become a big issue when used by a popular library so people tend to avoid the practice altogether.
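If you want array methods on a NodeList without patching its prototype, a safer pattern (Array.from is widely available in 2017) is to convert first:

// Convert the NodeList to a real Array instead of extending NodeList.prototype.
var ids = Array.from(document.querySelectorAll('li')).map(function (el) {
    return el.id;
});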

How do you know which parameters to set for a javascript function?

Coming from Java, Javascript can be really frustrating.
I'm hoping someone can put this into simple terms for me.
I'm struggling to understand how JavaScript programmers know which parameters to pass to a method they're calling - especially when that method is being called as a callback (which in my eyes seems like an added level of complexity).
For example, take the function addEventListener. In this function, a typical use looks like
myDOMItem.addEventListener("click", function(e){...}, false);
In the documentation for this function (hyperlinked to the name above) I don't see any mention of these options. Whereas in Java you can easily check that your parameters match the expected types, especially with a good IDE, in JavaScript it seems like a huge guessing game or requires serious in-depth knowledge of each function.
How do Javascript programmers do it?
The documentation you linked to does show the form in your example:
target.addEventListener(type, listener[, useCapture]);
The type parameter is the string "click", listener is the function object, and useCapture is false.
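Spelling out the same call with each argument named may make the mapping clearer (myDOMItem is taken from the question):

var type = 'click';               // which event to listen for
var listener = function (e) {     // receives an Event object when the event fires
    console.log(e.target);
};
var useCapture = false;           // optional; defaults to false

myDOMItem.addEventListener(type, listener, useCapture);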
We either remember things, or get a good IDE that actually has decent support. Everyone has their favorite, so I won't presume to say that there is a "best" IDE. In my humble opinion, Webstorm is one of the best.
The Mozilla documentation clearly states that the function takes four arguments:
string (type)
function/implementer of EventListener (listener)
boolean (useCapture)
boolean (wantsUntrusted), available only for Mozilla/Gecko browsers
As you can see from the signature, the last two parameters are optional and are therefore wrapped in square brackets:
target.addEventListener(type, listener[, useCapture, wantsUntrusted]);
To code effectively in JavaScript, you need to organize a development environment just as you would for Java. The Java world provides an almost endless number of tools for different tasks (UI development, server-side development, etc.), and the same is true of JavaScript. I would like to avoid advertising particular IDEs/editors, so I advise you to search for the right tools for your JavaScript-related dev stack.
P.S. Personal opinion: I also came to JavaScript from Java. In my experience, the main problem for Java developers is not the lack of tools but the lack of strict code structure. Since Java gives you a solid OOP background, it will probably be easier for you to start working with JavaScript in OOP terms. A good development stack for OOP-style JavaScript is provided by the open source project Google Closure Tools.

How can I prove that my JavaScript files are in the scope of a specific JS or ECMA version?

Let's say you would get a bunch of .js files and now it is your job to sort them into groups like:
requires at least JavaScript 1.8.5
requires at least E4X (ECMAScript for XML)
requires at least ECMAScript 5
or something like this.
I am interested in any solution, but especially in those which work using JavaScript or PHP. This is for the creation of automated specifications, but it shouldn't matter; this is a nice task which should be easy to solve. However, I have no idea how, and it is not easy for me. So, if this is easy for you, please share any hints.
I would expect something like this - http://kangax.github.com/es5-compat-table/# - just not for browsers, rather for a given file to be checked against different implementations of JavaScript.
My guess is that each version must have some specifics which can be tested for. However, all I can find is material about "what version does this browser support".
PS: Don't take "now it is your job" literally, I used it to demonstrate the task, not to imply that I expect work done for me; while in the progress of solving this, it would be just nice to have some help or direction.
EDIT: I took the easy way out, by requiring ECMAScript 5 to be supported at least as well as by the current Firefox for my project to work as intended and expected.
However, I am still interested in any solution attempts, or at least a definite answer of "is possible(, with XY)" or "is not possible, because ..."; XY can be just some keyword, like FrameworkXY or DesignPatternXY or whatever, or of course a more detailed solution.
Essentially you are looking to find the minimum requirements for some javascript file. I'd say that isn't possible until run time. JavaScript is a dynamic language. As such you don't have compile time errors. As a result, you can't tell until you are within some closure that something doesn't work, and even then it would be misleading. Your dependencies could in fact fix many compatibility issues.
Example:
JS File A uses some ES5 feature
JS File B provides a shim for ES5 deficient browsers or at least mimics it in some way.
JS File A and B are always loaded together, but independently A looks like it won't work.
Example2:
Object.create is what you want to test
Some guy named Crockford ships a shim that adds a create function to Object
Object.create now works in less capable browsers, and nothing is broken
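Roughly, that well-known shim looks like this (simplified; the published version has more checks):

if (typeof Object.create !== 'function') {
    Object.create = function (proto) {
        function F() {}      // throwaway constructor
        F.prototype = proto; // inherit from the given prototype object
        return new F();
    };
}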
Solution 1:
Build or find a dependency map. You definitely already have one, either explicitly, or you could generate it by iterating over your HTML files.
Run all relevant code paths in environments with decreasing functionality (e.g. ES5, then E4X, then JS 1.x, and so forth).
Once a bundle of JS files fails for some code path, you know its minimum requirement.
Perhaps you could iterate over the public functions in your objects and use dependency injection to fill in constructors and methods. This sounds really hard though.
Solution 2:
Use webdriver to visit your pages in various environments.
Map window.onerror to a function that tells you if your current page broke while performing some actions (a sketch follows below).
On error you will know that there is a problem with the bundle on the current page so save that data.
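A minimal sketch of that window.onerror hook (recordFailure is a hypothetical reporting helper, not a real API):

window.onerror = function (message, source, lineno) {
    // Record which page/bundle failed in the current environment.
    recordFailure({
        page: location.href,
        message: message,
        source: source,
        line: lineno
    });
    return true; // returning true suppresses the browser's default error reporting
};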
Both these solutions assume that you always write perfect JS that never has errors, which is something you should strive for but isn't realistic. This might, however, provide you with some basic "smoke testing".
This is not possible in an exact way, and it also is not a great way of looking at things for this type of issue.
Why it's not possible
JavaScript doesn't have static typing; properties are resolved through the prototype chain. This means that for any piece of code you would have to infer the type of an object and check along the prototype chain before determining what function would be called for a given function call.
You would, for instance, have to be able to tell that $(x).bind() or $(x).map() are not making calls to the ECMAScript 5 bind or map functions, but to the jQuery ones. This means that you would really have to parse out the whole code base and make inferences about type. If you didn't have the whole code base, this would be impossible. If you had a function that took an object and you called bind on it, you would have no idea whether that was supposed to be Function.prototype.bind or jQuery's bind, because that's not decided until runtime. In fact it's possible (though not good coding practice) that it could be both, and that what is run depends on the input to a function, or even on user input. So you might be able to make a guess about this, but you couldn't do it exactly.
Making all of this even harder, the eval function combined with the ability to get user input or Ajax data means that you don't even know what types some objects are or could be, leaving aside the issue that eval could attempt to run code that meets any specification.
Here's an example of a piece of code that you couldn't parse
var userInput = $("#input").val();
var objectThatCouldBeAnything = eval(userInput);
objectThatCouldBeAnything.map(function (x) {
    return !!x;
});
There's no way to tell whether this code is producing a jQuery object in the eval and running jQuery's .map, or producing an array and running Array.prototype.map. And that's the strength and weakness of a dynamically typed language like JavaScript: it provides tremendous flexibility, but limits what you can tell about the code before run time.
Why it's not a good strategy
ECMAScript specifications are a standard, but in practice they are never implemented perfectly or consistently. Different environments implement different parts of the standard. Having an "ECMAScript 5" piece of code does not guarantee that any particular browser will implement all of its properties perfectly. You really have to determine that on a property-by-property basis.
What you're much better off doing is finding a list of functions or properties that are used by the code. You can then compare that against the supported properties for a particular environment.
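As a rough sketch of that checklist approach (featuresUsed and isSupported are illustrative names, not a real tool):

// Properties the code under inspection is known to use.
var featuresUsed = ['JSON.parse', 'Object.create', 'Function.prototype.bind'];

function isSupported(path) {
    var obj = window;
    var parts = path.split('.');
    for (var i = 0; i < parts.length; i++) {
        if (obj == null || obj[parts[i]] === undefined) {
            return false;
        }
        obj = obj[parts[i]];
    }
    return true;
}

// Anything left in `missing` needs a polyfill in the current environment.
var missing = featuresUsed.filter(function (f) {
    return !isSupported(f);
});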
This is still a difficult-to-impossible problem for the reasons mentioned above, but it's at least a useful one. And you could gain value doing this even with a loose approximation (assuming that bind actually is ECMAScript 5 unless it's called on a $() wrapper; that's not going to be perfect, but it might still be useful).
Trying to figure out which standard is implemented just isn't practical in terms of helping you decide whether to use the code in a particular environment. It's much better to know what functions or properties the code is using, so that you can compare that to the environment and add polyfills if necessary.

When to use jQuery wrapper methods instead of built-in javascript methods

Which jQuery methods should be avoided in favour of built-in methods / properties?
Example:
$('#el').each(function () {
    // this.id vs $(this).attr('id');
    // this.checked vs $(this).is(':checked');
});
I use the direct javascript property like this.id in these cases:
Whenever it does exactly what I want.
When speed is important.
When all browsers I care about support exactly what I need.
I use the jQuery access method when:
There are cross browser support issues or I'm not sure that there aren't cross browser issues.
When the jQuery way has more or enhanced functionality that is useful in my circumstance.
When it's part of a chained operation and chaining works better with jQuery.
When I'm already using jQuery for this operation and it seems inconsistent to mix/match some direct access and some jQuery access.
For example: str = $(elem).html() doesn't really have any advantages over str = elem.innerHTML, but $(elem).html(str) does have some advantages over elem.innerHTML = str, because the jQuery method cleans up the elements being removed (unbinding event handlers and clearing jQuery data) more completely than the innerHTML way.
This is a broad question, but here are a couple of guidelines that I think are useful.
You can/should use native js methods when:
You know what you are doing. That is, you know the spec and any possible inconsistencies between browsers.
You are writing a performance-critical piece of code where convenience is not the top priority.
You are not satisfied with the way the library function works. This can break down into the following:
the library implementation is buggy (that happens)
the library implementation is incomplete (e.g. doesn't yet support some new features that are OK to use in your case)
your vision of some feature simply differs from the library's implementation
The first condition should be true in any case :) The others can combine or stand alone. I understand that they are all debatable and can provoke long philosophical discussions, but I think they are good starting points and cover a lot of cases.
But the bottom line is - if you don't have a reason to avoid library methods, use them!
Most DOM properties are problem-free in all major browsers. For the two examples you mention (id and checked), the DOM properties are 100% reliable and should be used in preference to the jQuery equivalent for reliability, performance and readability.
The same goes for all properties corresponding to boolean attributes: readonly, disabled, selected are common ones. Here are some more. className is also 100% solid. For other properties and attributes, it's up to you. Using a library like jQuery may be the best approach for these if you're unsure and can afford to lose some performance (usually the case).
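For the two properties from the question, the direct-access versions look like this:

var el = document.getElementById('el');

// Boolean attributes map to reliable DOM properties:
if (el.checked) {
    console.log(el.id); // direct, fast, and consistent across browsers
}

// The jQuery equivalents of the same checks:
if ($(el).is(':checked')) {
    console.log($(el).attr('id'));
}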
Stack Overflow regular Andy Earnshaw has blogged about this: http://whattheheadsaid.com/2010/10/utilizing-the-awesome-power-of-jquery-to-access-properties-of-an-element
