The way it is:
I have recently joined a webapp project which maintains, as a matter of standard, one single globally-available object (i.e., itself a property of window) which contains, as properties or recursive sub-properties, all the functions and variables necessary to run the application, including stateful indicators for all the widgets, initialization aliases, and generic DOM manipulation methods. Everything. I would try to illustrate with simplified pseudo-code, but that defeats the point of my concern, which is that this thing is cyclopean. Basically nothing is encapsulated: every single granular component can be read or modified from anywhere.
The way I'd like it:
In my recent work I've been the senior or only JavaScript developer, so I've been able to write in a style that uses functions, often immediately invoked / self-executing, for scoping discrete code blocks and keeping granular primitives as variables within those scopes. Using this pattern, everything is locked to its execution scope by default, and occasionally a few judiciously chosen getter/setter functions are returned in cases where an API needs to be exposed.
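Roughly, the kind of thing I mean by style B, as a deliberately simplified sketch with invented names:

var counterWidget = (function () {
    // Private state, invisible outside this closure.
    var count = 0;

    // Only a deliberately chosen API is returned.
    return {
        increment: function () { count += 1; },
        getCount: function () { return count; }
    };
}());

counterWidget.increment();
console.log(counterWidget.getCount()); // 1
// There is no way to touch `count` directly from outside this module.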
…Is B more performant than A on a generic level?
Refactoring the code to functional parity from style A to style B is a gargantuan task, so I can't make any meaningful practical test of my assertion, but is the monolithic God object anti-pattern a known performance monster compared to the scoped functional style? I would argue for B for the sake of legibility, code safety, and separation of concerns... But I imagine keeping everything in memory all the time, crawling through lookup chains, etc. would either make it inherently performance-intensive to access anything, or at least make garbage collection a very difficult task.
There are a few things to consider; a more memory-intensive program isn't necessarily slower.
Also, even if you use a self-executing function and only expose a few functions, the rest of the enclosing scope is still kept in memory because the exposed functions may need it. Memory leaks caused by closures are a big topic on the web right now.
Now, consider the way V8 works: JavaScript code is compiled to native machine code. Functions which are used a lot become hot code and the same cached, optimized version of the function is used over and over again. Something similar is true for objects.
But if you ever modify an object's shape later, the optimized code that relied on it has to be thrown away and there is a performance hit.
If you use try...catch blocks, the code containing them can't be optimized efficiently (at least in older versions of V8). However, if you isolate the try...catch in its own small function, that does help.
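As a sketch of that idea (the function names here are invented): keep the try...catch out of the hot function and isolate it in a small helper.

// The hot function contains no try/catch, so the optimizer can handle it.
function sumParsed(items) {
    var total = 0;
    for (var i = 0; i < items.length; i++) {
        total += safeParse(items[i]);
    }
    return total;
}

// The try/catch lives in its own small function; only this helper
// pays the optimization penalty.
function safeParse(text) {
    try {
        return JSON.parse(text).value || 0;
    } catch (e) {
        return 0;
    }
}

console.log(sumParsed(['{"value":2}', 'not json', '{"value":3}'])); // 5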
So really, more than anything, the most important things for performance are to write anything mostly static as a function, and not to change the shape of defined objects repeatedly.
Wrapping your code in a self-executing function probably won't help much, as it's all still kept in memory. The additional function definition might be compiled differently, but because it's just a wrapper, there should be almost no difference at all.
Related
I'm building an interpreter and I'm now at the point where I need to implement closures. I understand the concept pretty well, but I have a question about why they're designed the way they are.
In terms of how a closure is designed/interpreted, there need to be three things:
a variable
the body of logic that the variable is bound to
the environment that is saved at the closure's instantiation, so that free variables within the body can be bound when the closure is evaluated
I understand what each of these is for; I'm just wondering why the third item is needed at all when substitution at the moment of the closure's creation would do the same thing. Is there anything I'm not accounting for?
Essentially, what I'm asking is: why not just substitute the free variables with their respective environment values at closure creation instead of carrying the environment around?
I guess it's a little late, but oh well...
That depends on your computation model (evaluation strategy for example).
If all the data structures [which can get bound to a variable and, in effect, enclosed] are immutable in your language, your method should work.
It works with purely lexical Lisp dialects (e.g. the functional subset of Scheme), nice and smooth.
It might not work if:
You pass arguments by reference, as was already mentioned in the comments. Call by value is fine, and references to immutable objects are fine too.
Your environment binding and/or lookup causes some side effect. That would be rather exotic, but who knows?
Also, mind that you don't have to enclose the entire environment, just the free variables of your function's body (easy!).
The only reasons [I am aware of] for implementing closures as body+environment are:
you might want to pass references to mutable objects. This happens, among other places, with dictionaries in JS and Python; it is a bit scary to have a closure change behaviour over time, but oh well (see the JavaScript sketch after this list).
you don't need to write a substitution function. Mind that it has to keep the scoping correct, so it would have to resemble your evaluation function [if your computational model is substitutional] -- so why repeat yourself? There is also the delicate nature of values: with applicative order ("eager evaluation"), when you substitute a value into the body, you need it to be lifted to an expression whose value is that thing (if by any chance you are implementing a Lisp variant, think about symbols -- you don't substitute the value HI!, but the expression (quote HI!); this does not apply to cases where all your data structures evaluate to themselves, like numbers or truth values in most Lisps). These are not problems in general, but they introduce some complexity to your interpreter, and simple is good.
the bound value might be memory-consuming, and the variable you enclose might occur [as a free variable] more than once -- your body will be significantly larger (e.g. the value is a bitmap, a sound sample, an enormous matrix... you get the picture). It is a similar problem to computation duplication with lazy evaluation, but with respect to memory rather than time. This is also usually not a problem, as computer memories are big.
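To make that concrete in JavaScript terms (just an illustration, not your interpreter's language): once a binding can be reassigned after the closure is created, the closure must see the live binding, not a value copied in at creation time.

function makeCounter() {
    var count = 0;                       // a binding that gets reassigned later
    return {
        increment: function () { count = count + 1; },
        current:   function () { return count; }
    };
}

var c = makeCounter();
c.increment();
c.increment();
console.log(c.current()); // 2 -- both closures share the live binding.
// Substituting count's value (0) into each body at creation time would
// have frozen both closures at 0 and broken the sharing.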
I've run out of ideas for what else might break, but if you're not into checking your computation model "on paper" (by equational reasoning), you should implement it and try the trickiest cases [if any of these apply]: side effects, lazy evaluation, references, mutable objects, and combinations of the above. They are definitely not obstacles, just places worth checking.
Hope that helps, good luck with your interpreter!
PS If you like to think about this kind of stuff, check out defunctionalization ("Defunctionalization at Work" by Danvy and Nielsen is a pretty accessible read, and the first part alone is enough to get some inspiration).
In JavaScript: Understanding the Weird Parts the instructor explains that memory for variables is set up during a so-called creation phase (and that undefined is assigned); then the execution phase happens. But why is this useful when we don't know what value(s) the variable will later point to?
Clearly variables can point to many different things, from e.g. a short string all the way to a deeply nested object structure, and I assume that they can vary wildly in the amount of memory they need.
If line-by-line execution, including variable assignment, happens only in the later execution phase, how can the initial creation phase know how to set up memory? Or is memory set aside only for the name in each variable name/value pair, with memory for the value being managed differently?
The instructor is referring to Google Chrome's V8 engine (as is evidenced by his use of it in the video).
The V8 engine uses several optimization approaches to facilitate memory management. First, it compiles the JavaScript code, and during compilation it determines how many variables (hidden classes; more on these later) it needs to create. These determine the amount of memory originally allocated.
V8 compiles JavaScript source code directly into machine code when it is first executed. There are no intermediate byte codes, no interpreter. Property access is handled by inline cache code that may be patched with other machine instructions as V8 executes. 1
The first set is created by walking the JavaScript code to determine how many different object "shapes" there are. Anything without a prototype is considered a "transitioning object shape".
The main way objects are encoded is by separating the hidden class (description) from the object (content). When new objects are instantiated, they are created using the same initial hidden class as previous objects from the same constructor. As properties are added, objects transition from hidden class to hidden class, typically following previous transitions in the so-called “transition tree”. 2
Conversely, if the object does have a prototype, then its particular shape will be tracked separately.
Prototypes have 2 main phases: setup and use. Prototypes in the setup phase are encoded as dictionary objects. Any direct access to the prototype, or access through a prototype chain, will transition it to use state, making sure that all such accesses from now on are fast. 2
The compiler will essentially read all possible variables as being one of these two possible shapes and then allocate the amount of memory necessary to facilitate instantiating those shapes.
Once the first set of shapes is set up, V8 then takes advantage of what it calls "fast property access" to build on the first set of variables (hidden classes) that were set up during the build.
To reduce the time required to access JavaScript properties V8 dynamically creates hidden classes behind the scenes 3
There are two advantages to using hidden classes: property access does not require a dictionary lookup, and they enable V8 to use the classic class-based optimization, inline caching 3
As a result, not all memory use is known during compilation, only how much to allocate for the core set of hidden classes. That allocation will grow as the code is executed, through things like assignment, inline cache misses, and conversion into dictionary mode (which happens when too many properties are assigned to an object, among other nuanced factors).
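As a rough illustration of the hidden-class idea (nothing here is observable from JavaScript itself; it only shows the object-shape pattern the engine keys on):

function Point(x, y) {
    this.x = x;   // every Point gets 'x' then 'y' in the same order,
    this.y = y;   // so all instances can share one hidden class
}

var a = new Point(1, 2);
var b = new Point(3, 4);
// a and b follow the same transition chain: {} -> {x} -> {x, y}

var c = new Point(5, 6);
c.z = 7;  // adding a property later forces a transition to a new hidden class;
          // pile on enough ad-hoc properties and the object may fall back to
          // the slower dictionary mode mentioned above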
1. Dynamic machine code generation, https://github.com/v8/v8/wiki/Design%20Elements#dynamic-machine-code-generation
2. Setting up prototypes in V8, https://medium.com/@tverwaes/setting-up-prototypes-in-v8-ec9c9491dfe2
3. Fast Property Access, https://github.com/v8/v8/wiki/Design%20Elements#fast-property-access
JavaScript is a bit of a problem here, you know, at least in my opinion.
In JavaScript there is a specification of the language, ECMAScript,
and there are implementations made by engine developers.
You have to understand that what is taught about how JavaScript works "under the hood" is usually the specification of JavaScript.
You might have heard the terms Execution Context and Lexical Environment.
They are only spec concepts describing how the language should work; they give developers an idea of how to build a JS engine in a way that behaves like the spec.
An execution context is purely a specification mechanism and need not correspond to any particular artefact of an ECMAScript implementation. It is impossible for ECMAScript code to directly access or observe an execution context. (ECMAScript specification)
Every JavaScript engine is implemented differently, and they are all supposed to behave like the ECMAScript spec.
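For instance, the observable behaviour behind the spec's "creation phase" wording is just something like this, and every engine has to reproduce it however it is implemented internally:

console.log(x);  // undefined, not a ReferenceError: the spec requires the
                 // binding for 'x' to exist (initialised to undefined)
                 // before this line runs
var x = 5;
console.log(x);  // 5
// console.log(y); // would throw ReferenceError: y was never declared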
There is a concept that every time an execution context is created,
it has two stages: a creation phase and an execution phase.
Every time the creation phase begins, memory is allocated for the variables of that execution context.
In reality it doesn't work like that at all.
There isn't really a "creation phase", at least not in the V8 engine (Google Chrome's JS engine), in the way that you might think.
I can give you a hint and tell you that every time you call a function that doesn't have another function inside it,
the variables inside that function basically reuse a fixed "block" in memory.
I'll give you a basic example. Let's say the V8 engine uses some address in memory,
say the address is 0x61FF1C.
function setNum(){ var num = 5; }
Every time I call the setNum function, the value of num (5) gets stored at address 0x61FF1C, overwriting whatever was inside 0x61FF1C before.
That's just how the V8 engine works in that scenario. The example I gave is just to get the idea across; I know it sounds a little vague.
There is much more to the V8 engine which I'm not going to discuss, because it's huge.
By the way, I'm not a V8 developer, so I don't know everything about that engine;
I'm not even close to that level, but I know some things.
Anyway, I think every JS developer should think in terms of the spec, but also remember that while engines behave like the spec, it doesn't mean they work exactly like the spec internally.
When any program executes, we call the time of execution the running time. During the running time (or processing time), the processor processes code and communicates with memory: it takes code from memory, processes it, gives the result back to memory, and takes the next piece of code. Throughout the running time, some space in memory grows, some space drops to zero, some variables get new values, some variables are deleted, and so on. The volume of working memory during the running time is changing all the time.
In JavaScript, we have two ways of making a "class" and giving it public functions.
Method 1:
function MyClass() {
    var privateInstanceVariable = 'foo';
    this.myFunc = function() { alert(privateInstanceVariable); }
}
Method 2:
function MyClass() { }
MyClass.prototype.myFunc = function() {
    alert("I can't use private instance variables. :(");
}
I've read numerous times people saying that using Method 2 is more efficient as all instances share the same copy of the function rather than each getting their own. Defining functions via the prototype has a huge disadvantage though - it makes it impossible to have private instance variables.
Even though, in theory, using Method 1 gives each instance of an object its own copy of the function (and thus uses way more memory, not to mention the time required for allocations) - is that what actually happens in practice? It seems like an optimization web browsers could easily make is to recognize this extremely common pattern, and actually have all instances of the object reference the same copy of functions defined via these "constructor functions". Then it could only give an instance its own copy of the function if it is explicitly changed later on.
Any insight - or, even better, real world experience - about performance differences between the two, would be extremely helpful.
See http://jsperf.com/prototype-vs-this
Declaring your methods via the prototype is faster, but whether or not this is relevant is debatable.
If you have a performance bottleneck in your app it is unlikely to be this, unless you happen to be instantiating 10000+ objects on every step of some arbitrary animation, for example.
If performance is a serious concern, and you'd like to micro-optimise, then I would suggest declaring via prototype. Otherwise, just use the pattern that makes most sense to you.
I'll add that, in JavaScript, there is a convention of prefixing properties that are intended to be seen as private with an underscore (e.g. _process()). Most developers will understand and avoid these properties, unless they're willing to forgo the social contract, but in that case you might as well not cater to them. What I mean to say is: you probably don't really need true private variables...
In the new version of Chrome, this.method is about 20% faster than prototype.method, but creating new objects is still slower.
If you can reuse an object instead of always creating a new one, this can be 50%-90% faster than creating new objects. Plus there is the benefit of no garbage collection, which is huge:
http://jsperf.com/prototype-vs-this/59
It only makes a difference when you're creating lots of instances. Otherwise, the performance of calling the member function is exactly the same in both cases.
I've created a test case on jsperf to demonstrate this:
http://jsperf.com/prototype-vs-this/10
You might not have considered this, but putting the method directly on the object is actually better in one way:
Method invocations are very slightly faster (jsperf) since the prototype chain does not have to be consulted to resolve the method.
However, the speed difference is almost negligible. On top of that, putting a method on a prototype is better in two more impactful ways:
Faster to create instances (jsperf)
Uses less memory
Like James said, this difference can be important if you are instantiating thousands of instances of a class.
That said, I can certainly imagine a JavaScript engine that recognizes that the function you are attaching to each object does not change across instances and thus only keeps one copy of the function in memory, with all instance methods pointing to the shared function. In fact, it seems that Firefox is doing some special optimization like this but Chrome is not.
ASIDE:
You are right that it is impossible to access private instance variables from inside methods on prototypes. So I guess the question you must ask yourself is: do you value being able to make instance variables truly private over utilizing inheritance and prototyping? I personally think that making variables truly private is not that important and would just use the underscore prefix (e.g., "this._myVar") to signify that although the variable is public, it should be considered private. That said, in ES6 there is apparently a way to have the best of both worlds!
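As one hedged sketch of that ES6 route, using a WeakMap keyed by the instance (one common option, not the only one): prototype methods stay shared while the per-instance state never becomes a visible property.

var Person = (function () {
    // One WeakMap per module; entries vanish when instances are garbage collected.
    var privates = new WeakMap();

    function Person(name) {
        privates.set(this, { name: name });
    }

    // A shared prototype method can still reach per-instance private state.
    Person.prototype.greet = function () {
        return 'Hello, ' + privates.get(this).name;
    };

    return Person;
}());

var p = new Person('Ada');
console.log(p.greet()); // "Hello, Ada"
console.log(p.name);    // undefined -- the name is not an own property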
You may use this approach, and it will allow you to use the prototype and still access instance variables.
var Person = (function () {
    function Person(age, name) {
        this.age = age;
        this.name = name;
    }

    Person.prototype.showDetails = function () {
        alert('Age: ' + this.age + ' Name: ' + this.name);
    };

    return Person; // This is not referencing `var Person` but the Person function
}()); // See Note1 below
Note1:
The parentheses invoke the function immediately (a self-invoking function) and assign the result to the var Person.
Usage
var p1 = new Person(40, 'George');
var p2 = new Person(55, 'Jerry');
p1.showDetails();
p2.showDetails();
In short, use Method 2 for creating properties/methods that all instances will share. Those are effectively "global" to the class, and any change to them will be reflected across all instances. Use Method 1 for creating instance-specific properties/methods.
I wish I had a better reference but for now take a look at this. You can see how I used both methods in the same project for different purposes.
Hope this helps. :)
This answer should be considered an expansion of the other answers, filling in missing points. Both personal experience and benchmarks are incorporated.
As far as my experience goes, I religiously use constructors to literally construct my objects, whether methods are private or not. The main reason is that when I started, that was the most immediately obvious approach to me, so it's not a special preference. It might have been as simple as liking visible encapsulation, whereas prototypes feel a bit disembodied. My private methods are assigned as variables in the scope as well. Although this is my habit and keeps things nicely self-contained, it's not always the best habit and I do sometimes hit walls. Apart from wacky scenarios with highly dynamic self-assembly according to configuration objects and code layout, it tends to be the weaker approach in my opinion, particularly if performance is a concern. Knowing that the internals are private is useful, but you can achieve that via other means with the right discipline. Unless performance is a serious consideration, use whatever works best for the task at hand.
Using prototype inheritance and a convention to mark items as private does make debugging easier, as you can then traverse the object graph easily from the console or debugger. On the other hand, such a convention makes obfuscation somewhat harder and makes it easier for others to bolt their own scripts onto your site. This is one of the reasons the private-scope approach gained popularity. It's not true security, but it adds resistance. Unfortunately a lot of people still think it's a genuine way to program secure JavaScript. Since debuggers have gotten really good, code obfuscation takes its place. If you're looking for security flaws where too much is on the client, it's a design pattern you might want to look out for.
A convention allows you to have protected properties with little fuss. That can be a blessing and a curse. It does ease some inheritance issues as it is less restrictive. You still do have the risk of collision or increased cognitive load in considering where else a property might be accessed. Self assembling objects let you do some strange things where you can get around a number of inheritance problems but they can be unconventional. My modules tend to have a rich inner structure where things don't get pulled out until the functionality is needed elsewhere (shared) or exposed unless needed externally. The constructor pattern tends to lead to creating self contained sophisticated modules more so than simply piecemeal objects. If you want that then it's fine. Otherwise if you want a more traditional OOP structure and layout then I would probably suggest regulating access by convention. In my usage scenarios complex OOP isn't often justified and modules do the trick.
All of the tests here are minimal. In real-world usage it is likely that modules will be more complex, making the hit a lot greater than the tests here indicate. It's quite common to have a private variable with multiple methods working on it, and each of those methods adds initialisation overhead that you won't get with prototype inheritance. In most cases it doesn't matter, because only a few instances of such objects float around, although cumulatively it might add up.
There is an assumption that prototype methods are slower to call because of prototype lookup. It's not an unfair assumption; I made the same one myself until I tested it. In reality it's complex, and some tests suggest that aspect is trivial. Between prototype.m = f, this.m = f, and this.m = function ..., the last performs significantly better than the first two, which perform around the same. If the prototype lookup alone were a significant issue, then the last two would instead outperform the first significantly. Instead something else strange is going on, at least where Canary is concerned. It's possible functions are optimised according to what they are members of. A multitude of performance considerations come into play. You also see differences between parameter access and variable access.
Memory Capacity. It's not well discussed here. An assumption you can make up front that's likely to be true is that prototype inheritance will usually be far more memory efficient and according to my tests it is in general. When you build up your object in your constructor you can assume that each object will probably have its own instance of each function rather than shared, a larger property map for its own personal properties and likely some overhead to keep the constructor scope open as well. Functions that operate on the private scope are extremely and disproportionately demanding of memory. I find that in a lot of scenarios the proportionate difference in memory will be much more significant than the proportionate difference in CPU cycles.
Memory Graph. You can also jam up the engine, making GC more expensive. Profilers do tend to show time spent in GC these days. It's not only a problem when it comes to allocating and freeing more: you'll also create a larger object graph to traverse, so the GC consumes more cycles. If you create a million objects and then hardly touch them, depending on the engine it might turn out to have more of an ambient performance impact than you expected. I have proven that this does at least make the GC run for longer when objects are disposed of. That is, there tends to be a correlation between the memory used and the time it takes to GC. However, there are cases where the time is the same regardless of the memory. This indicates that the graph makeup (layers of indirection, item count, etc.) has more impact. That's not something that is always easy to predict.
Not many people use chained prototypes extensively, myself included, I have to admit. Prototype chains can be expensive in theory; someone will have measured that, but I've not measured the cost myself. If you instead build your objects entirely in the constructor and then have a chain of inheritance where each constructor calls a parent constructor on itself, in theory method access should be much faster. On the other hand, you can accomplish the equivalent if it matters (such as flattening the prototypes down the ancestor chain), provided you don't mind breaking things like hasOwnProperty and perhaps instanceof. In either case, things start to get complex once you go down the road of performance hacks. You'll probably end up doing things you shouldn't be doing.
Many people don't directly use either approach you've presented. Instead they make their own things using anonymous objects, allowing method sharing any which way (mixins, for example). There are also a number of frameworks that implement their own strategies for organising modules and objects. These are heavily convention-based custom approaches. For most people, and for you, the first challenge should be organisation rather than performance. This is often complicated by the fact that JavaScript gives you many ways of achieving things, versus languages or platforms with more explicit OOP/namespace/module support. When it comes to performance, I would say to avoid major pitfalls first and foremost.
There's a new Symbol type that's supposed to work for private variables and methods. There are a number of ways to use this and it raises a host of questions related to performance and access. In my tests the performance of Symbols wasn't great compared to everything else but I never tested them thoroughly.
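For reference, the Symbol approach looks roughly like this; note it is weaker privacy than a closure, since the key is still discoverable through Object.getOwnPropertySymbols:

var _count = Symbol('count'); // module-local key rather than a string property name

function Counter() {
    this[_count] = 0;
}

Counter.prototype.increment = function () {
    this[_count] += 1;
    return this[_count];
};

var c = new Counter();
console.log(c.increment());                   // 1
console.log(Object.keys(c));                  // [] -- not listed as a normal key
console.log(Object.getOwnPropertySymbols(c)); // [Symbol(count)] -- still discoverable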
Disclaimers:
There are lots of discussions about performance, and there isn't always a permanently correct answer, as usage scenarios and engines change. Always profile, but also measure in more than one way, as profiles aren't always accurate or reliable. Avoid putting significant effort into optimisation unless there's definitely a demonstrable problem.
It's probably better instead to include performance checks for sensitive areas in automated testing and to run when browsers update.
Remember that sometimes battery life matters as well as perceptible performance. The slowest solution might turn out faster after running an optimising compiler on it (i.e., a compiler might have a better idea of when restricted-scope variables are accessed than it does for properties marked as private by convention). Consider backends such as node.js, which can require better latency and throughput than you would often see in the browser. Most people won't need to worry about these things for something like validating a registration form, but the number of diverse scenarios where such things might matter is growing.
You have to be careful with memory allocation tracking tools to persist the result. In some cases where I didn't return and persist the data, it was optimised out entirely, or the sample rate was not sufficient between instantiation and dereferencing, leaving me scratching my head as to how an array initialised and filled to a million entries registered as 3.4KiB in the allocation profile.
In the real world, in most cases the only way to really optimise an application is to write it in the first place so you can measure it. There are dozens to hundreds of factors that can come into play, if not thousands, in any given scenario. Engines also do things that can lead to asymmetric or non-linear performance characteristics. If you define functions in a constructor, they might be arrow functions or traditional ones; each behaves differently in certain situations, and I have no idea about the other function types. Classes also don't behave the same in terms of performance as prototyped constructors that should be equivalent. You need to be really careful with benchmarks as well. Prototyped classes can have deferred initialisation in various ways, especially if you prototyped your properties as well (advice: don't). This means that you can understate initialisation cost and overstate access/property-mutation cost. I have also seen indications of progressive optimisation. In these cases I have filled a large array with instances of identical objects, and as the number of instances increases the objects appear to be incrementally optimised for memory, up to a point where the remainder is the same. It is also possible that those optimisations impact CPU performance significantly. These things depend heavily not merely on the code you write but on what happens at runtime, such as the number of objects, the variance between objects, and so on.
Someone mentioned that immediate or self-executing functions have to store the whole stack. Is this true? If so, what are the pros and cons of using something like the module pattern (which is based on an immediate function) vs. a plain function?
A function's scope is inherently private, but you can return items that you want to be public, so it can handle privacy.
The main difference I see is that you don't have global imports, or the ability to make sure that the developer (wait, that's me) uses new with the function (or it is complicated).
In general when trying to provide privacy and state when should one use the module pattern and when should one just use a plain function?
A second side question: does a function provide state when used with new?
Any function closure that persists because there are lasting references to variables or functions inside it occupies some amount of memory. In today's computers (even phones), this amount of memory is generally insignificant unless you're somehow repeating it thousands of times. So, using the language features to solve your design problems is generally more important than worrying about this amount of memory.
When you say "the whole stack", the calling stack for a top-level self-executing function is very small. There's really nothing else on the stack except for the one function that's being called.
A function is also an object. So, when it's used with new, it creates a new object that can have state (its properties) if you assign values to those properties. That's one of the main ways that objects are created in JavaScript. You can either call a function and examine its return value, or you can use it with new and the function serves as the constructor for a new object. A given function is usually designed to be used in one way or the other, not both.
The module pattern is generally used to control which variables are public and when making them public to put them into a structured namespace that uses very few top-level global variables. It isn't really something you choose instead of self-executing functions because they don't really solve the same problem. You can read more about the module pattern here: http://www.yuiblog.com/blog/2007/06/12/module-pattern/
You can read about a number of the options here: http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth and http://www.klauskomenda.com/code/javascript-programming-patterns/.
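Very roughly, the contrast looks like this (illustrative names only): a constructor used with new gives per-instance state, while the module pattern gives a one-off namespace with private shared state.

// Constructor used with `new`: per-instance state, many objects.
function Timer(label) {
    this.label = label; // state lives on each instance
    this.ticks = 0;
}
Timer.prototype.tick = function () { this.ticks += 1; };

var t1 = new Timer('a');
var t2 = new Timer('b');
t1.tick(); // t2.ticks is still 0

// Module pattern: a single namespaced object with private, shared state.
var timers = (function () {
    var registry = []; // private to the module
    return {
        add: function (timer) { registry.push(timer); },
        count: function () { return registry.length; }
    };
}());

timers.add(t1);
timers.add(t2);
console.log(timers.count()); // 2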
It is easier to discuss the pros/cons of a given technique in light of a specific problem that one is trying to solve or a specific design issue rather than a generic discussion of which is better when the things you've asked about are not really solving equivalent issues.
The best reference I know of for protected and private members (which can be hacked into JavaScript, but are not a core language feature) is this one: http://javascript.crockford.com/private.html. You are making tradeoffs when you use this method instead of the default prototype feature of the language, but you can achieve privacy if you really need it. But you should know that JavaScript was not built with private or protected members in mind, so to get that level of privacy you're relying on conventions about how you write your code.
This is perhaps a dumb question, but I am new to Javascript and desperate for help.
If the Javascript engine will look for global variables outside of a function, then what is the point of passing parameters to it? What do you gain?
I understand that global variables are generally frowned upon, but I still don't understand the purpose of passing variables. Does it have something to do with data encapsulation?
There are a few magic words that programmers use to describe different kinds of functions. Here are a few:
Re-entrant
ThreadSafe
Referentially Transparent
Idempotent
Pure
Side-Effects
You can look some of them up if you want a headache. The point is that progress in computer science and engineering has always been about reducing complexity. We have spent quite a lot of time thinking about the best way to write a function to achieve that goal: ideally, you can fit tiny bits of your program into your head at a time, and understand those bits, without also having to understand the overall functioning of the entire program simultaneously, or the detailed implementation of the insides of all the other functions. A function that uses global variables can't support that very well because:
You can't guarantee that the global variables exist
You can't guarantee that the global variables are what you think they are
You can't guarantee that other parts of the program haven't modified those variables in a way you didn't expect.
You can't easily generalise to use the function multiple times on multiple sets of variables.
You can't easily verify that the function works as advertised without first setting up the function's external environment and its dependencies.
If the global variables have changed in a way you didn't expect, it's really hard to track down which part of the program is the culprit. It could be any of 500 different functions that write to that variable!
On the other hand, if you explicitly pass in all the data a function needs to operate, and explicitly return all the results:
If something goes wrong with any of those variables, it's easy to find the source of the problem
It's easier to add code to verify the "domain" of your inputs. Is it really a string? Is it over a certain length, is it under a certain length? Is it a positive number? is it whole, or fractional? All these assumptions that your code needs to operate correctly can be explicit at the start of the function, instead of just crossing your fingers and hoping nothing goes wrong.
It's easier to guess what a particular function will actually do, if its output depends only on its input.
A function's parameters are not dependent on the naming of any external variables.
And other advantages.
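A tiny before/after illustration of that difference, with invented names:

// Relies on globals: only works if 'price' and 'taxRate' happen to exist
// and hold what the function expects.
var price = 100;
var taxRate = 0.2;
function totalWithTaxGlobal() {
    return price * (1 + taxRate);
}

// Takes parameters: self-describing, reusable on any inputs, easy to test.
function totalWithTax(price, taxRate) {
    return price * (1 + taxRate);
}

console.log(totalWithTaxGlobal());      // 120, but only for this one global state
console.log(totalWithTax(100, 0.2));    // 120
console.log(totalWithTax(59.99, 0.07)); // reusable with different data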
If you were only going to use global variables that the functions worked on, then you'd always have to know the inner workings of the functions and what your global variable names had to be for them to work.
Also, something like Math.abs(n) would be hard to call twice in one line if it used global variables.
Functions are reusable components of your code that execute a particular snippet on the provided variables, exhibiting varying behavior.
Encapsulation comes from being Object Oriented. Functions are more for giving structure to your program.
Also, you shouldn't overlook the execution time of a method: access is faster when a variable it uses exists in its own context rather than being global.
If you don't need parameters to be passed to functions, then you really don't need functions.
Functions are usually (and should be) used to provide code re-use -- use the same function on different variables. If a function accesses global variables, then every time I use it it will perform the same action. If I pass parameters, I can make it perform a different action (based on those different parameters) every time I use it.
One of the main benefits is that it keeps all the information the function needs, nearby. It becomes possible to look at just the function itself and understand what its input is, what it does, and what its output will be. If you are using global variables instead of passing arguments to the function, you’ll have to look all over your code to locate the data on which the function operates.
That’s just one benefit, of many, but an easy one to understand.
Global variables (in any language) can sometimes become stale. If there is a chance of this, it is good to declare, initialise and use them locally. You have to be able to trust what you are using.
Similarly, if something or someone can update your global variables, then you have to be able to trust the outcome of what will happen when you use them.
Global variables are not always needed by everything so why keep them hanging around?
That said, variables global to a namespace can be useful, especially if you are using something like jQuery selectors and you want to cache them for performance's sake.
Is this really a javascript question? I haven't encountered a language that doesn't have some form of global variables yet.