This is probably an odd question because it's more typical for people to ask how to avoid using globals.
Coming from the Ruby world, I've become very comfortable using globals in two specific cases:
Constants. When a file is imported in Ruby, all of its constants are automatically made available to the other files in the program.
(and this ties in with the first) Packages. When I load a Ruby Gem in a required file, it also becomes available in my other files.
I've been starting to use module.exports, but I'm finding that I'm importing the same modules in lots of different files.
I'd really like to have these features in JavaScript. The way I'm writing my code at the moment, I'm using a functional approach and passing all my constants as parameters. The problem is that my code is getting too verbose for my liking.
I'm really not looking for a "short answer: no" type of response, here. Even if it is too difficult, I'd appreciate being pointed in a direction for how to avoid passing constants as parameters to functions.
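To illustrate what I mean (the file and constant names here are made up), what I'm doing now is roughly one of these two things:

// constants.js
module.exports = { MAX_DEPTH: 10, GREETING: "hello" };

// consumer.js — every file that needs the constants has to require them again...
const { MAX_DEPTH, GREETING } = require("./constants");

function greet(name) {
    return GREETING + ", " + name;
}

// ...or, in the more "functional" style, every constant is threaded through as a parameter
function greetVerbose(name, GREETING) {
    return GREETING + ", " + name;
}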
One method of using globals could be to use HTML5 Local Storage.
My thinking is, have an object with your globals, and on page load save each global variable into its own local storage location.
So you have an object with your globals stored:
var globals = {
GLOBAL1: "SomeString",
GLOBAL2: 400
}
Then on load (or earlier, before the page loads, if you prefer) you can have a function run through your globals and save each value into local storage:
for(var key in globals) {
localStorage.setItem(key, globals[key]);
}
Then, later on, when a function needs, for example, GLOBAL2, you can call:
localStorage.getItem("GLOBAL2");
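One caveat worth noting (this part is my own assumption about how you'd use it): localStorage only stores strings, so the number 400 above would come back as the string "400". A small variation that round-trips values through JSON keeps their types:

// Save each global as JSON so numbers and objects survive the string-only storage.
for (var key in globals) {
    localStorage.setItem(key, JSON.stringify(globals[key]));
}

// Later, read one back and restore its original type.
var global2 = JSON.parse(localStorage.getItem("GLOBAL2")); // 400, as a number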
Related
I understand the scoping rules of Node/JavaScript, and from, for example, Understanding Execution Context and Execution Stack in Javascript, I think I understand the principle of how Execution Contexts work: the question is can you actually get access to them?
From an answer to the 2015 question How can I get the execution context of a javascript function inside V8 engine (involving (ab)using (new Error()).stack), and the answer to the 2018 question How can we get the execution context of a function?, I suspect the answer is "no" but just in case things have changed: is it possible to access/modify the Execution Context of a Node module/function?
(I'm very aware this screams either XY Problem or a desire to abuse Node/JavaScript: hopefully the Background section below will provide more context, and – if there's a different way of achieving what I want – that will work as well.)
In a nutshell, I want to achieve something like:
sharedVar = 1 ;
anotherModule.someFunction() ;
console.log( sharedVar ) ; // Where 'sharedVar' has changed value
Having a function in a different module being able to change variables in its caller's scope at will seems the definition of "A Dangerous Thing™", so – if it's possible at all – I expect it would need to be more like:
sharedVar = 1 ;
anotherModule.hereIsMyExecutionContext( SOMETHING ) ;
anotherModule.someFunction() ;
console.log( sharedVar ) ; // Where 'sharedVar' has changed value
and anotherModule would be something like:
let otherExecutionContext ;

function hereIsMyExecutionContext( anExecutionContext ) {
    otherExecutionContext = anExecutionContext ;
}

function someFunction() {
    //do something else
    otherExecutionContext.sharedVar = 42 ;
}

module.exports = { hereIsMyExecutionContext, someFunction } ;
and the question becomes what (if anything) can I replace SOMETHING with?
Notes / Things That Don't Work (for me)
You shouldn't be trying to do this! I realize what I'm trying to achieve isn't "clean" code. The use-case is very specific, where brevity (particularly in the code whose value I want changing) is paramount. It is not "production" code where the danger of forgotten, unexpected side-effects matters.
Returning a new value from the function. In my real use-case, there would be several variables that I would like someFunction() to be able to alter. Having to do { var1, var2, ... varN } = anotherModule.someFunction() would be both inconvenient and distracting (while the function might change some of these variables' values, it is not the main purpose of the function).
Have the variables be members of anotherModule. While using anotherModule.sharedVar would be the clean, encapsulated way of doing things, I'd really prefer not to have to use the module name every time: not only is it more typing, but it distracts from what the code that would be using these variables is really doing.
Use the global scope. If I wasn't using "use strict";, it would be possible to have sharedVar on the global object and freely accessible by both bits of code. As I am using strict-mode (and don't want to change that), I'd need to use global.sharedVar which has the same "cumbersomeness" as attaching it to anotherModule.
Use import. It looks like using import { sharedVar } from anotherModule allows "live" sharing of the variables I want between the two modules. However, by using import, the module using the shared variable has to be an ES Module (not a CommonJS Module), and cannot be loaded dynamically using require() (as I also need it to be). Loading it dynamically the ESM way (using import() as a function returning a promise) appears to work, but repeated loadings come from a cache. According to this answer to How to reimport module with ES6 import, there isn't a "clean" way of clearing that cache (cf. delete require.cache[] that works with CommonJS modules). If there is a clean way of invalidating a cached ESM loaded through import(), please feel free to add an answer to that question (and a comment here, please), although the current Node JS docs on ESMs don't mention one, so I'm not hopeful :-(
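For reference, the "live" sharing mentioned above looks roughly like this (a sketch assuming both files are ES Modules; the file names are invented):

// anotherModule.mjs
export let sharedVar = 1 ;

export function someFunction() {
    //do something else
    sharedVar = 42 ; // importers see this change, because the binding is live
}

// main.mjs
import { sharedVar, someFunction } from "./anotherModule.mjs" ;

someFunction() ;
console.log( sharedVar ) ; // 42 — a live binding, not a copy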
Background
In early December, a random question on SO alerted me to the Advent of Code website, where a different programming problem is revealed every day, and I decided to have a go using Node. The first couple of problems I tackled using standalone JS files, at which point I realized that there was a lot of common code being copy-pasted between each file. In parallel with solving the puzzles, I decided to create a "framework" program to coordinate them, and to provide as much of the common code as possible. One goal in creating the framework was that the individual "solution" files should be as "lean" as possible: they should contain the absolute minimum code over that needed to solve the problem.
One of the features of the framework relevant to this question is that it reloads (currently using require()) the selected solution file each time, so that I can work on the solution without re-running the framework... this is why switching to import and ES Modules has drawbacks, as I cannot (cleanly) invalidate the cached solution module.
The other relevant feature is that the framework provides aoc.print(...) and aoc.trace(...) functions. These format and print their arguments: the first all the time; the second conditionally, depending on whether the solution is being run normally or in "trace" mode. The latter is little more than:
function trace( ...args ) {
    if( traceMode ) {
        print( ...args )
    }
}
Each problem has two sets of inputs: an "example" input, for which expected answers are provided, and the "real" input (which tends to be larger and more involved). The framework would typically run the example input with tracing enabled (so I could check the "inner workings") and run the "real" input with tracing disabled (because it would produce too much output). For most problems, this was fine: the time "wasted" preparing the parameters for the call, and then making the call to aoc.trace() only to find there was nothing to do, was negligible. However, for one solution in particular (involving 10 million iterations), the difference was significant: nearly 30s when making the ignored trace calls; under a second if the calls were commented out, or I "short-circuited" the trace-mode decision by using the following construct:
TRACE && aoc.print( ... )
where TRACE is set to true/false as appropriate. My "problem" is that TRACE doesn't track the trace mode in the framework: I have to set it manually. I could, of course, use aoc.traceMode && aoc.print( ... ), but as discussed above, this is more to type than I'd like and makes the solution's code more "cluttered" than I'd ideally want (I freely admit these are somewhat trivial reasons...)
One of the features of the framework relevant to this question is that it reloads the selected solution file each time
This is key. You should not leave the loading, parsing and execution to require, where you cannot control it. Instead, load the file as text, do nefarious things to the code, then evaluate it.
The evaluation can be done with eval, new Function, the vm module, or by messing with the module system.
The nefarious things I was referring to would most easily be prefixing the code by some "implicit imports", whether you do that by require, import or just const TRACE = true;. But you can also do any other kind of preprocessing, such as macro replacements, where you might simply remove lines that contain trace(…);.
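A minimal sketch of that idea, using the vm module (the file layout, the TRACE flag and the shape of the sandbox are my assumptions, not part of the framework described in the question):

const fs = require("fs");
const vm = require("vm");

function loadSolution(path, traceMode) {
    // Load the solution as plain text instead of letting require() parse it.
    let code = fs.readFileSync(path, "utf8");

    // "Implicit imports": prefix the code with whatever should be in scope.
    code = `const TRACE = ${traceMode};\n` + code;

    // Optional macro-style preprocessing, e.g. drop trace lines entirely.
    if (!traceMode) {
        code = code.replace(/^.*\btrace\(.*$\n?/gm, "");
    }

    // Evaluate in a fresh context; nothing is cached, so reloading is trivial.
    const sandbox = { console, module: { exports: {} } };
    vm.runInNewContext(code, sandbox, { filename: path });
    return sandbox.module.exports;
}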
Looks like you're heading the wrong way. The Execution Context isn't meant to be accessible by the user code.
An option is to make a class and pass its instances around different modules for your purpose. JS objects exist in the heap and can be accessed anywhere as long as you have its reference, so you can control it at will.
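A small sketch of that approach (module and property names are invented for illustration):

// context.js — exports a single shared instance that every module references
class Context {
    constructor() {
        this.sharedVar = 1;
    }
}
module.exports = new Context();

// anotherModule.js
const ctx = require("./context");

function someFunction() {
    ctx.sharedVar = 42; // visible to every module holding the same reference
}
module.exports = { someFunction };

// main.js
const context = require("./context");
const anotherModule = require("./anotherModule");

anotherModule.someFunction();
console.log(context.sharedVar); // 42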
I'm using a query on both server and client (pub/sub). So I have something like this at a few different locations.
const FOO = 'bar';
Collection.find({property:FOO})
FOO may potentially change, and rather than having to update my code in separate locations, I was thinking it may be worth it to abstract this away to a global variable that is visible by both client and server.
I created a new file 'lib/constants.js' and simply did FOO = 'bar'; (note: no keyword). This seems to work just fine. I found this solution in the accepted answer to How can I access constants in the lib/constants.js file in Meteor?
My question is whether this is a desired pattern in Meteor, and even in general JS.
I understand I can abstract this away into a module, but that may be overkill in this case. I also think using session/reactive vars is unsafe as it can kinda lead to action at a distance. I'm not even gonna consider using settings.json as that should only be for environment variables.
Any insights?
Yes. If you are using an older version of Meteor you can use settings.json, but for newer versions we have the import option.
I don't think the pattern is that bad. I would put that file in /imports though and explicitly import it.
Alternatively, you can write into Meteor.settings.public from the server, e.g., on start-up, and those values will be available on the client in the same location. You can do this without having a settings file, which is nice because it doesn't require you to make any changes between development and production.
Server:
Meteor.startup(() => {
// code to run on server at startup
Meteor.settings.public.FOO = 'bar';
});
Client:
> console.log(Meteor.settings.public.FOO);
bar
This is actually a b̶a̶d̶ unfavoured pattern, because with global variables you cannot track what changed, and in general constructing modular and replaceable components is much better. This pattern was only made possible in Meteor's early days, when the imports directory/pattern was not yet supported and you'd have your entire code split up between just server and client.
https://docs.meteor.com/changelog.html#v13220160415
You can find many write-ups about it online, and even Stack Overflow answers, so I don't want to restate the obvious.
Using a settings.json variable is not an option since the value may change dynamically, so what are our options? For me I'd say:
Store it in the database and either publish it or retrieve it using methods, with proper access scoping of course. You can also modify it dynamically using methods that author DB changes (a rough sketch follows below).
Or, you may try using Meteor.EnvironmentVariable. I'd be lying if I said I know how to use it properly, but I've seen it being used in a couple of Meteor projects to tackle a similar situation.
https://www.eventedmind.com/items/meteor-dynamic-scoping-with-environment-variables
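As a rough sketch of the first option (the collection, publication and method names here are all invented, and the access checks are only hinted at):

// imports/api/appSettings.js
import { Mongo } from 'meteor/mongo';
export const AppSettings = new Mongo.Collection('appSettings');

// server
import { Meteor } from 'meteor/meteor';
import { AppSettings } from '/imports/api/appSettings';

Meteor.publish('appSettings', function () {
  return AppSettings.find({}, { fields: { key: 1, value: 1 } });
});

Meteor.methods({
  'appSettings.set'(key, value) {
    // a real method should check this.userId / roles before writing
    AppSettings.upsert({ key }, { $set: { value } });
  }
});

// client
import { Meteor } from 'meteor/meteor';
import { AppSettings } from '/imports/api/appSettings';

Meteor.subscribe('appSettings');
const FOO = (AppSettings.findOne({ key: 'FOO' }) || {}).value; // e.g. 'bar'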
Why are global variables considered bad practice?
I don't understand WHY, and in what scenario, this would be used.
My current web setup consists of lots of components, which are just functions or factory functions, each in their own file, and each function "rides" the app namespace, like: app.component.breadcrumbs = function(){...}, and so on.
Then GULP just combines all the files, and I end up with a single file, so a page controller (each "page" has a controller which loads the components the page needs) can just load its components, like: app.component.breadcrumbs(data).
All the components can be easily accessed on demand, and the single JavaScript file is well cached and everything. This way of working seems extremely good; I've never seen any problem with it. Of course, this can be (and is) scaled nicely.
So how are ES6 imports for functions any better than what I described?
What's the deal with importing functions instead of just attaching them to the app's namespace? It makes much more sense for them to be "attached".
File structure
/dist/app.js // web app namespace and so on
/dist/components/breadcrumbs.js // some component
/dist/components/header.js // some component
/dist/components/sidemenu.js // some component
/dist/pages/homepage.js // home page controller
// GULP concat all above to
/js/app.js // this file is what is downloaded
Then inside homepage.js it can look like this:
app.routes.homepage = function(){
"use strict";
var DOM = { page : $('#page') };
// append whatever components I want to this page
DOM.page.append(
app.component.header(),
app.component.sidemenu(),
app.component.breadcrumbs({a:1, b:2, c:3})
)
};
This is an extremely simplified code example, but you get the point.
Answers to this are probably a little subjective, but I'm going to do my best.
At the end of the day, both methods support creating a namespace for a piece of functionality so that it does not conflict with other things. Both work, but in my view, modules, ES6 or otherwise, provide a few extra benefits.
Explicit dependencies
Your example seems very biased toward a "load everything" approach, but you'll generally find that to be uncommon. If your components/header.js needs to use components/breadcrumbs.js, assumptions must be made. Has that file been bundled into the overall JS file? You have no way of knowing. Your two options are:
Load everything
Maintain a file somewhere that explicitly lists what needs to be loaded.
The first option is easy, and in the short term is probably fine. The second is complicated to maintain: because it would be maintained as an external list, it would be very easy to stop needing one of your component files but forget to remove it from the list.
It also means that you are essentially defining your own syntax for dependencies when, again, one has already been defined in the language/community.
What happens when you want to start splitting your application into pieces? Say you have an application that is a single large file that drives 5 pages on your site, because they started out simple and it wasn't big enough to matter. Now the application has grown and should be served with a separate JS file per-page. You have now lost the ability to use option #1, and some poor soul would need to build this new list of dependencies for each end file.
What if you start using a file in a new place? How do you know which JS target files actually need it? What if you have twenty target files?
What if you have a library of components that are used across your whole company, and one of them starts relying on something new? How would that information be propagated to all the developers using these components?
Modules allow you to know with 100% certainty what is used where, with automated tooling. You only need to package the files you actually use.
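For contrast, here is roughly what that header/breadcrumbs relationship looks like with ES modules (the file contents are invented for illustration): the dependency is declared in the file that needs it, so bundlers can follow it automatically.

// components/breadcrumbs.js
export function breadcrumbs(data) {
    return `<nav>${Object.values(data).join(' / ')}</nav>`;
}

// components/header.js — states its own dependency instead of assuming it was bundled
import { breadcrumbs } from './breadcrumbs.js';

export function header() {
    return `<header>${breadcrumbs({ home: 'Home' })}</header>`;
}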
Ordering
Related to dependency listing is dependency ordering. If your library needs to create a special subclass of your header.js component, you are no longer only accessing app.component.header() from app.routes.homepage(), which would presumably be running at DOMContentLoaded. Instead you need to access it during the initial application execution. Simple concatenation offers no guarantee that it will have run yet. If you are concatenating alphabetically and your new thing is app.component.blueHeader(), then it would fail.
This applies to anything that you might want to do immediately at execution time. If you have a module that immediately looks at the page when it runs, or sends an AJAX request or anything, what if it depends on some library to do that?
This is another argument against #1 (load everything), so you start having to maintain a list again. That list is again going to be a custom thing you'll have to come up with instead of a standardized system.
How do you train new employees to use all of this custom stuff you've built?
Modules execute files in order based on their dependencies, so you know for sure that the stuff you depend on will have executed and will be available.
Scoping
Your solution treats everything as a standard script file. That's fine, but it means that you need to be extremely careful not to accidentally create global variables by placing them in the top-level scope of a file. This can be solved by manually wrapping file content in (function(){ ... })();, but again, it's one more thing you need to remember to do instead of having it provided for you by the language.
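A contrived example of the difference:

// in a plain concatenated script file, this quietly becomes window.current
var current = null;

// the manual fix — one more convention to remember in every file
(function () {
    "use strict";
    var current = null; // now private to this file
})();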
Conflicts
app.component.* is something you've chosen, but there is nothing special about it, and it is global. What if you wanted to pull in a new library from GitHub, for instance, and it also used that same name? Do you refactor your whole application to avoid conflicts?
What if you need to load two versions of a library? That has obvious downsides if it's big, but there are plenty of cases where you'd still rather ship something big than something non-functional. If you rely on a global object, it is now up to that library to make sure it also exposes an API like jQuery's noConflict. What if it doesn't? Do you have to add it yourself?
Encouraging smaller modules
This one may be more debatable, but I've certainly observed it within my own codebase. With modules, and the lack of boilerplate necessary to write modular code with them, developers are encouraged to look closely at how things get grouped. It is very easy to end up making "utils" files that are giant bags of functions thousands of lines long, because it is easier to add to an existing file than it is to make a new one.
Dependency webs
Having explicit imports and exports makes it very clear what depends on what, which is great, but the side-effect of that is that it is much easier to think critically about dependencies. If you have a giant file with 100 helper functions, that means that if any one of those helpers needs to depend on something from another file, it needs to be loaded, even if nothing is ever using that helper function at the moment. This can easily lead to a large web of unclear dependencies, and being aware of dependencies is a huge step toward thwarting that.
Standardization
There is a lot to be said for standardization. The JavaScript community has moved heavily in the direction of reusable modules. This means that if you hop into a new codebase, you don't need to start off by figuring out how things relate to each other. Your first step, at least in the long run, won't be to wonder whether something is AMD, CommonJS, System.register or something else. By having a syntax in the language, it's one less decision to have to make.
The long and short of it is, modules offer a standard way for code to interoperate, whether that be your own code, or third-party code.
If your current process is to always concatenate everything into a single large file, only ever execute things after the whole file has loaded, and you have 100% control over all the code you are executing, then you've essentially defined your own module specification based on your own assumptions about your specific codebase. That is totally fine, and no one is forcing you to change that.
No such assumptions can be made for the general case of JavaScript code, however. It is precisely the objective of modules to provide a standard in a way that does not break existing code, while also providing the community with a way forward: an approach that is standardized and that offers clearer paths for interoperability between your own code and third-party code.
I've seen things like this in many projects. The server generates an HTML page with a <script> tag, and JS code in it, which defines some "constants" using templates.
For example.
<script>
window.CONSTANTS = {};
window.CONSTANTS.USER_ID = "<?= getUserId() ?>";
window.CONSTANTS.BASE_PATH = "<?= getBasePath() ?>";
</script>
The other way I can think of would be letting the client make an AJAX call to get all the necessary data.
Assigning variables to the global namespace (in the case of the browser, "window.[somevariable]") is considered dangerous/evil/whatever.
You should always, as a matter of what they call "good practice", namespace your stuff. Let's say your application is called Jumpr (to give it a trendy hipster name); you might consider namespacing variables and modules that belong to your app under the global variable Jumpr.
In the case of constants, I personally like assigning constants to an app.CONSTANTS namespace:
Jumpr.CONSTANTS
As far as dynamically outputting constants from the server goes, which is not a bad thing in my opinion (it keeps the server constants that are shared with your application in sync), you could opt to import them via a constants script file, which then imports into your namespace like so:
Constants.js (this file could be auto-generated at request-time):
var Jumpr = Jumpr || {};
Jumpr.CONSTANTS = {
SOMECONST: "Some Constant Value 0"
SOMECONST1: "Some Constant Value 1"
}
...etc. But this is just one way of doing it. If you're unfamiliar with this pattern, what it's doing is checking for an already defined Jumpr module. If it's not defined, it makes a new object. Otherwise, it uses the existing one and "extends" it using the defined constants.
Also note these aren't true constants. To achieve the illusion of constants in JS you would need to create a get/set closure that would then act as a read-only retriever, but I'm not going to go into that here.
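For the curious, one way to get that read-only behaviour (a sketch, and not required for the pattern above) is to freeze the object, or to expose values only through a getter backed by a closure:

var Jumpr = Jumpr || {};

// Option 1: freeze the object so reassignment silently fails (or throws in strict mode)
Jumpr.CONSTANTS = Object.freeze({
    SOMECONST: "Some Constant Value 0",
    SOMECONST1: "Some Constant Value 1"
});

// Option 2: a getter-only accessor backed by a closure
Jumpr.constants = (function () {
    var values = { SOMECONST: "Some Constant Value 0" };
    return { get: function (name) { return values[name]; } };
})();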
Edit
Since the context of the question has changed, I'm going to update my answer:
In my view, no, it's not bad to have dynamically generated constants. Those constants, I would assume, are being generated from your back-end constants. You might want these environments to be in sync (you could capture everything from server routes, to form names, to whatever else is in there). I personally think that this could be a good thing.
I would like to caveat that with the observation that you'll be taking a performance hit for doing it. I would rather recommend that you reference a constants file/module, which is then updated every time your constants on the server side change. It'll reduce the computing power required to generate your pages and is worth the effort, since it's generally easy to set up, and constants rarely change (key word: they're constant) :)
Note, that's exactly what I already recommended above.
It is a bad practice. Global variables and functions can be overwritten by other scripts. Here is a good example.
I've Heard Global Variables Are Bad, What Alternative Solution Should I Use?
http://www.w3schools.com/js/js_best_practices.asp
I have just started putting JSLint into my build pipeline, and it has been great. However, it has pointed out something in most of my files that is not really an error, although it will see it as one. I have changed my constructor to take an instance of this object so the tests pass, but I am not sure if I really should, as in all other major languages I would not need to do this.
I will have to add some more context to this, for it to make any sense so here goes.
I have the closest thing I can get to an enum in JavaScript, which is basically a globally scoped JSON-style variable with a load of constants. It is used to describe event types, so rather than every class that wants to raise/listen to events having to put hard-coded strings, it can just use a constant from this enum variable. As I just mentioned, I have these classes that make use of this static enum, but they just make use of the global version of this variable, not a local instance passed through the constructor, and this is where my problems begin: in the actual app I know for a fact that the enum file will be included, which will make it globally accessible. However, JSLint has no context of this; it only sees an individual file without worrying about external dependencies, as it deems these to be bad, which in any other language would be true, but in JS you cannot achieve the same thing without having global variables (to my knowledge).
As I said originally, I have now added this enum to the constructor to let JSLint pass the files, but it just feels a bit wrong passing it in; maybe this is because I am thinking about it as a regular developer and not a JavaScript developer...
Now should I stick to this, and pass it through the constructor, and just mock it in my tests, or should I take the approach that it should always be there?
I am sure this will come down to people's personal opinions, but it would be nice to know if I am being an idiot and should just keep each file as its own silo, or if there is a way for me to have my cake and eat it.
I found that JSLint supports a comment to tell it of global variables:
/*globals myGlobal*/
I have decided to just use my enum as a global and get on with the more important things.
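For illustration, a consuming file then looks something like this (the enum and function names are invented; only the /*globals*/ directive is the point):

/*globals EventTypes*/

// EventTypes is defined in the globally loaded enum file, so JSLint would
// normally flag it as undeclared; the directive above tells it not to.
function OrderView(bus) {
    "use strict";
    bus.listen(EventTypes.ORDER_CREATED, function () {
        // update the view
    });
}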
JavascriptLint greatly improves on JSLint in this respect, as it allows you to define inter-file dependencies:
If a script references a variable, function, or object from another script, you will need to add a /*jsl:import PathToOtherScript*/ comment in your script. This tells JavaScript Lint to check for items declared in the other script. Relative paths are resolved based on the path of the current script.
See: http://javascriptlint.com/docs/index.htm