I'm working within a JavaScript + BackboneJS (an MVC framework) + RequireJS stack, but this question is somewhat OO-generic.
Let me start by explaining that in Backbone, your Views are a mix of traditional Views and Controllers, and your HTML templates are the traditional MVC Views.
I've been racking my brain about this for a while and I'm not sure what the right/pragmatic approach should be.
I have a User object that contains user preferences (like unit system, language selection, anything else) that a lot of code depends on.
Some of my Views do most of the work without the use of templates (by using 3rd party libs, like Mapping and Graphing libs), and as such they have a dependency on the User object to take care of unit conversion, for example. I'm currently using RequireJS to manage that dependency without breaking encapsulation too much.
Some of my Views do very little work themselves, and only pass on Model data to my templating engine / templates, which do the work and DO have a dependency on the User object, again, for things like units conversion. The only way to pass this dependency into the template is by injecting it into the Model, and passing the model into the template engine.
My question is, how to best handle such a widely needed dependency?
- Create an App-wide reference/global object that is accessible everywhere? (YUK)
- Use RequireJS-managed dependencies (sketched briefly after this list), even though it's generally only recommended to use managed dependency loading for class/object definitions rather than concrete objects.
- Or, only ever use dependency injection, and manually pass that dependency into everything that needs it?
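For concreteness, this is roughly how I picture the RequireJS-managed approach (option 2); the module and attribute names here are only placeholders, not my actual code:

// user.js - a RequireJS module that returns one concrete, app-wide User instance
define(['models/user'], function (User) {
    // The module body runs once, so every module that requires 'user'
    // gets the same instance back.
    return new User({ unitSystem: 'metric', language: 'en' });
});

// some_view.js - any View that needs the preferences just declares the dependency
define(['backbone', 'user'], function (Backbone, user) {
    return Backbone.View.extend({
        render: function () {
            var km = this.model.get('distanceKm');
            var distance = user.get('unitSystem') === 'imperial' ? km * 0.621371 : km;
            this.$el.text(distance.toFixed(1));
            return this;
        }
    });
});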
From a purely technical point of view, I would argue that mutable globals (globals that may change), especially in JavaScript, are dangerous and wrong, especially since JavaScript is full of code that gets executed asynchronously. Consider the following code:
window.loggedinuser = Users.get("Paul");
addSomeStuffToLoggedinUser();
window.loggedinuser = Users.get("Sam");
doSomeOtherStuffToLoggedinUser();
Now if addSomeStuffToLoggedinUser() executes asynchronously somewhere (e.g. it does an AJAX call, and then another AJAX call when the first one finishes), it may very well be adding stuff to the new loggedinuser ("Sam") by the time it gets to the second AJAX call. Clearly not what you want.
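As a rough sketch of how that race can bite (assuming jQuery, and the hypothetical function names from above):

function addSomeStuffToLoggedinUser() {
    $.get("/api/stuff", function (stuff) {
        // By the time this callback fires, window.loggedinuser may already
        // have been reassigned to "Sam" by the code that ran after the original call.
        window.loggedinuser.set("stuff", stuff);
    });
}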
Having said that, I'm even less of a supporter of having some user object that we hand around all the time from function to function, ad infinitum.
Personally, having to choose between these two evils, I would choose a global scope for things that "very rarely change" --- unless perhaps I was building a nuclear power station or something. So, I tend to make the logged-in user available globally in my app, taking the risk that if somehow, for some reason, some call runs very late, and I have a situation where one user logs out and another one logs in right after, something strange may happen. (Then again, if a meteor crashes into the datacenter that hosts my app, something strange may happen as well... I'm not protecting against that either.) Actually, a possible solution would be to reload the whole app as soon as someone logs out.
So, I guess it all depends on your app. One thing that makes it better (and makes you feel like you're still getting some OO karma points) is to hide your data in some namespaced singleton:
var myuser = MyApp.domain.LoggedinDomain.getLoggedinUser();
doSomethingCoolWith(myuser);
instead of
doSomethingCoolWith(window.loggedinuser);
although it's pretty much the same thing in the end...
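A minimal sketch of what that namespaced singleton might look like (the names are of course just an example):

var MyApp = MyApp || {};
MyApp.domain = MyApp.domain || {};

MyApp.domain.LoggedinDomain = (function () {
    var loggedinUser = null; // kept private inside the closure

    return {
        setLoggedinUser: function (user) { loggedinUser = user; },
        getLoggedinUser: function () { return loggedinUser; }
    };
})();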
I think you already answered your own question; you just want someone else to say it for you :) Use DI, but you aren't really "manually" passing that dependency into everything, since you need to reference it to use it anyway.
Considering the TDD approach, how would you test this? DI is best for a new project, but JS gives you flexible options to deal with concrete global dependencies when testing, i.e. context construction. Going way back, Yahoo laid out a module pattern where all modules were loosely coupled and not dependent on each other, but where it was OK to have a global context. That global context can make your app construction more pragmatic for things that are constantly reused. It's just that you need to apply it judiciously/sparingly, and there need to be very strong cases for those things being dynamic.
Related
I'm using a query on both server and client (pub/sub), so I have something like this at a few different locations:
const FOO = 'bar';
Collection.find({property:FOO})
FOO may potentially change, and rather than having to update my code in several locations, I was thinking it might be worth abstracting this away into a global variable that is visible to both client and server.
I created a new file 'lib/constants.js' and simply did FOO = 'bar'; (note: no keyword). This seems to work just fine. I found this solution in the accepted answer to "How can I access constants in the lib/constants.js file in Meteor?"
My question is whether this is a desirable pattern in Meteor, and even in general JS.
I understand I can abstract this away into a module, but that may be overkill in this case. I also think using session/reactive vars is unsafe, as it can kind of lead to action at a distance. I'm not even going to consider using settings.json, as that should only be for environment variables.
Any insights?
Yes. If you are using an older version of Meteor then you can use settings.json, but for newer versions we have the import option.
I don't think the pattern is that bad. I would put that file in /imports though and explicitly import it.
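A minimal sketch of that, assuming the file lives at imports/constants.js (the path and names are just an example):

// imports/constants.js
export const FOO = 'bar';

// anywhere on the client or server
import { FOO } from '/imports/constants.js';

Collection.find({ property: FOO });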
Alternatively, you can write into Meteor.settings.public from the server, e.g., on start-up, and those values will be available on the client in the same location. You can do this without having a settings file, which is nice because it doesn't require you to make any changes between development and production.
Server:
Meteor.startup(() => {
// code to run on server at startup
Meteor.settings.public.FOO = 'bar';
});
Client:
> console.log(Meteor.settings.public.FOO);
bar
This is actually a b̶a̶d̶ unfavoured pattern, because with global variables you cannot track what changed them, and in general constructing modular and replaceable components is much better. This pattern was only made possible by Meteor's early days, when the imports directory/pattern was not yet supported and your entire code was split between server and client.
https://docs.meteor.com/changelog.html#v13220160415
You can find many write-ups about it online, and even Stack Overflow answers, so I don't want to restate the obvious.
Using a settings.json variable is not an option since the value may change dynamically, so what are our options? For me I'd say:
Store it in the database and either publish it or retrieve it using methods, with proper access scoping of course. You can also dynamically modify it using methods that author the DB changes.
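A rough sketch of that database option (the collection, publication and method names are made up):

// both/appSettings.js
import { Mongo } from 'meteor/mongo';
export const AppSettings = new Mongo.Collection('appSettings');

// server/appSettings.js
import { Meteor } from 'meteor/meteor';
import { AppSettings } from '../both/appSettings.js';

Meteor.publish('appSettings', function () {
  // scope access here, e.g. check this.userId
  return AppSettings.find({}, { fields: { key: 1, value: 1 } });
});

Meteor.methods({
  'appSettings.set'(key, value) {
    // add proper permission checks before writing
    AppSettings.upsert({ key }, { $set: { value } });
  }
});

// client (after importing Meteor and AppSettings as above)
Meteor.subscribe('appSettings');
const foo = AppSettings.findOne({ key: 'FOO' });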
Or, you may try using Meteor.EnvironmentVariable. I'd be lying if I said I knew how to use it properly, but I've seen it used in a couple of Meteor projects to tackle a similar situation.
https://www.eventedmind.com/items/meteor-dynamic-scoping-with-environment-variables
Why are global variables considered bad practice?
I don't understand WHY, or in what scenario, this would be used.
My current web setup consists of lots of components, which are just functions or factory functions, each in their own file, and each function "rides" the app namespace, like: app.component.breadcrumbs = function(){...}, and so on.
Then Gulp just combines all the files, and I end up with a single file, so a page controller (each "page" has a controller which loads the components the page needs) can just load its components, like: app.component.breadcrumbs(data).
All the components can be easily accessed on demand, and the single JavaScript file is well cached and everything. This way of working seems extremely good; I've never seen any problem with it. Of course, this can be (and is) scaled nicely.
So how are ES6 imports for functions any better than what I described?
What's the deal with importing functions instead of just attaching them to the app's namespace? It makes much more sense for them to be "attached".
File structure
/dist/app.js // web app namespace and so on
/dist/components/breadcrumbs.js // some component
/dist/components/header.js // some component
/dist/components/sidemenu.js // some component
/dist/pages/homepage.js // home page controller
// Gulp concats all of the above into
/js/app.js // this file is what is downloaded
Then inside homepage.js it can look like this:
app.routes.homepage = function(){
"use strict";
var DOM = { page : $('#page') };
// append whatever components I want to this page
DOM.page.append(
app.component.header(),
app.component.sidemenu(),
app.component.breadcrumbs({a:1, b:2, c:3})
);
};
This is an extremely simplified code example, but you get the point.
Answers to this are probably a little subjective, but I'm going to do my best.
At the end of the day, both methods support creating a namespace for a piece of functionality so that it does not conflict with other things. Both work, but in my view modules, ES6 or otherwise, provide a few extra benefits.
Explicit dependencies
Your example seems very biased toward a "load everything" approach, but you'll generally find that to be uncommon. If your components/header.js needs to use components/breadcrumbs.js, assumptions must be made. Has that file been bundled into the overall JS file? You have no way of knowing. Your two options are:
Load everything
Maintain a file somewhere that explicitly lists what needs to be loaded.
The first option is easy and in the short term is probably fine. The second is complicated to maintain: because it lives as an external list, it would be very easy to stop needing one of your component files but forget to remove it.
It also means that you are essentially defining your own syntax for dependencies when, again, one has already been defined by the language/community.
What happens when you want to start splitting your application into pieces? Say you have an application that is a single large file that drives 5 pages on your site, because they started out simple and it wasn't big enough to matter. Now the application has grown and should be served with a separate JS file per-page. You have now lost the ability to use option #1, and some poor soul would need to build this new list of dependencies for each end file.
What if you start using a file in new places? How do you know which JS target files actually need it? What if you have twenty target files?
What if you have a library of components that are used across your whole company, and one of them starts relying on something new? How would that information be propagated to any number of the developers using those components?
Modules allow you to know with 100% certainty what is used where, with automated tooling. You only need to package the files you actually use.
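As a rough illustration of what the same kind of component looks like as ES6 modules (the paths and names mirror your example, they aren't real files):

// components/breadcrumbs.js
export function breadcrumbs(data) {
    // build and return the breadcrumbs element
    return $('<nav>').text(Object.keys(data).join(' / '));
}

// pages/homepage.js
import { breadcrumbs } from '../components/breadcrumbs.js';
import { header } from '../components/header.js';

export function homepage() {
    $('#page').append(header(), breadcrumbs({ a: 1, b: 2, c: 3 }));
}

A bundler can now walk those imports and package only what homepage.js actually uses.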
Ordering
Related to dependency listing is dependency ordering. If your library needs to create a special subclass of your header.js component, you are no longer only accessing app.component.header() from app.routes.homepage(), which would presumably run at DOMContentLoaded. Instead you need to access it during the initial application execution. Simple concatenation offers no guarantee that it will have run yet. If you are concatenating alphabetically and your new thing is app.component.blueHeader(), then it would fail.
This applies to anything that you might want to do immediately at execution time. If you have a module that immediately looks at the page when it runs, or sends an AJAX request or anything, what if it depends on some library to do that?
This is another argument against #1 (load everything), so you start having to maintain a list again. That list is again going to be a custom thing you'll have to come up with, instead of a standardized system.
How do you train new employees to use all of this custom stuff you've built?
Modules execute files in order based on their dependencies, so you know for sure that the stuff you depend on will have executed and will be available.
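Sticking with the hypothetical blueHeader example from above, the import makes that ordering explicit instead of alphabetical (assuming header() returns an element you can decorate):

// components/blueHeader.js
import { header } from './header.js'; // guaranteed to have been evaluated before this file

export function blueHeader() {
    // build on the plain header at module-evaluation time
    return header().addClass('blue');
}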
Scoping
Your solution treats everything as a standard script file. That's fine, but it means that you need to be extremely careful not to accidentally create global variables by placing them in the top-level scope of a file. This can be solved by manually wrapping file content in (function(){ ... })();, but again, it's one more thing you need to know to do instead of having it provided for you by the language.
Conflicts
app.component.* is something you've chosen, but there is nothing special about it, and it is global. What if you wanted to pull in a new library from GitHub, for instance, and it also used that same name? Do you refactor your whole application to avoid conflicts?
What if you need to load two versions of a library? That has obvious downsides if it's big, but there are plenty of cases where you'll still prefer "big" over "non-functional". If you rely on a global object, it is now up to that library to make sure it also exposes an API like jQuery's noConflict. What if it doesn't? Do you have to add it yourself?
Encouraging smaller modules
This one may be more debatable, but I've certainly observed it within my own codebase. With modules, and the lack of boilerplate necessary to write modular code with them, developers are encouraged to look closely at how things get grouped. It is very easy to end up making "utils" files that are giant bags of functions thousands of lines long, because it is easier to add to an existing file than it is to make a new one.
Dependency webs
Having explicit imports and exports makes it very clear what depends on what, which is great, but the side-effect of that is that it is much easier to think critically about dependencies. If you have a giant file with 100 helper functions, that means that if any one of those helpers needs to depend on something from another file, it needs to be loaded, even if nothing is ever using that helper function at the moment. This can easily lead to a large web of unclear dependencies, and being aware of dependencies is a huge step toward thwarting that.
Standardization
There is a lot to be said for standardization. The JavaScript community has moved heavily in the direction of reusable modules. This means that if you hop into a new codebase, you don't need to start off by figuring out how things relate to each other. Your first step, at least in the long run, won't be to wonder whether something is AMD, CommonJS, System.register or something else. By having a syntax in the language, it's one less decision to have to make.
The long and short of it is, modules offer a standard way for code to interoperate, whether that be your own code, or third-party code.
If your current process is to always concatenate everything into a single large file, only ever execute things after the whole file has loaded, and keep 100% control over all the code that you are executing, then you've essentially defined your own module specification based on your own assumptions about your specific codebase. That is totally fine, and no one is forcing you to change it.
No such assumptions can be made for the general case of JavaScript code, however. It is precisely the objective of modules to provide a standard in such a way as to not break existing code, while also providing the community with a way forward. What modules offer is an approach that is standardized and that gives clearer paths for interoperability between your own code and third-party code.
Let's say I have a Rails app with a resource - User. I have JavaScript that should be available for any page that is served. I have JavaScript that should be available for any page that is served under User. And I have JavaScript that should be available for each specific action under User. In Rails 3.1 and higher, is there an easy way to make sure that my JavaScript is only available to the pages that require it? What about CoffeeScript?
I think the linked item from Bob is relevant (there's a comment relating to the trade-off between performance and the number of files loaded), but I saw the question as being more about namespacing, scoping and structure.
To specifically answer the question (and assuming you're using jQuery), consider the following CoffeeScript:
$ ->
  doSomething()
  doSomethingElse("#some-element")

doSomething = ->
  alert("I'm doing something")

doSomethingElse = (selector) ->
  alert("I'm hiding something")
  $(selector).hide()
The CoffeeScript compiler will wrap all of this within an anonymous function, and thus these functions will only be available within the context in which the file is loaded (a script tag, a controller-specific file, or application.js for global visibility).
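Roughly speaking (the exact output varies with the compiler version), the compiled JavaScript looks something like the following, which is where the scoping comes from:

(function() {
  var doSomething, doSomethingElse;

  $(function() {
    doSomething();
    return doSomethingElse("#some-element");
  });

  doSomething = function() {
    return alert("I'm doing something");
  };

  doSomethingElse = function(selector) {
    alert("I'm hiding something");
    return $(selector).hide();
  };

}).call(this);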
There are a couple of models to consider. A straightforward one is to follow the pattern of having "things" that are specific to a model, and those that are generally useful (global). So if I want a javascript function that's specific to User, then it goes in app/assets/javascripts/users.js.coffee, otherwise it needs to be global (in application.js.coffee).
A much more complete and complex solution is suggested by the rails-backbone gem, which has generators that create CoffeeScript models, views, templates and routers that replace a lot of what we would get with regular rails generate scaffold foo -- the same kinds of CRUD operations are done in an entirely different way, and the templates (embedded javascript) in particular are quite similar to ERB templates. This is more of a leap of faith, for me.
Whether in an application-wide file, or a controller-specific one or ones, in either case, the Asset Pipeline will glom all of the code together and send it all to the user (assuming you retain the default configuration), but that's a separate topic.
Not sure if this answers the question you had, but I think it's important to distinguish between the delivery of assets, which the Asset Pipeline does, and the execution of JavaScript, which is a matter of scoping, something CoffeeScript does a very nice job of, and which Backbone.js takes even further.
Hey all, I'm looking at building an ajax-heavy site, and I'm trying to spend some time upfront thinking through the architecture.
I'm using CodeIgniter and jQuery. My initial thought process was to figure out how to replicate MVC on the JavaScript side, but it seems the M and the C don't really have much of a place.
A lot of the JS would be AJAX calls, but I can see it growing beyond that, with plenty of DOM manipulation, as well as exploring the HTML5 client-side database. How should I think about architecting these files? Does it make sense to pursue MVC? Should I go the jQuery plugin route somehow? I'm lost as to how to proceed and I'd love some tips. Thanks all!
I've made an MVC-style JavaScript program, complete with M and C. Maybe I made a wrong move, but I ended up authoring my own event dispatcher library. I made sure that the different tiers only communicate using a message protocol that can be translated into pure JSON objects (even though I don't actually do that translation step).
So jQuery lives primarily in the V part of the MVC architecture. On the M and C side, I primarily have code which could run in the standalone CLI version of SpiderMonkey, or in the server-side Rhino implementation of JavaScript, if necessary. In this way, if requirements change later, I can have my M and C layers run on the server side, communicating via those JSON messages to the V side in the browser. It would only require some modifications to my message dispatcher to change this. In the future, if browsers get some peer-to-peer style technologies, I could get the different tiers running in different browsers, for instance.
However, at the moment, all three tiers run in a single browser. The event dispatcher I authored allows multicast messages, so implementing an undo feature now will be as simple as creating a new object that simply listens to the messages that need to be undone. Autosaving state to the server is a similar maneuver. I'm able to do full detailed debugging and profiling inside the event dispatcher. I'm able to define exactly how the code runs, and how quickly, when, and where, all from that central bit of code.
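A stripped-down sketch of the general shape of that dispatcher (not my actual library; the topic names are made up):

var dispatcher = {
    listeners: {},

    // multicast: every listener subscribed to a topic receives the message
    subscribe: function (topic, fn) {
        (this.listeners[topic] = this.listeners[topic] || []).push(fn);
    },

    publish: function (topic, message) {
        // messages are plain objects, so they could be serialized to JSON later
        (this.listeners[topic] || []).forEach(function (fn) { fn(message); });
    }
};

// an undo feature can simply listen in on the same messages the view already uses
var undoStack = [];
dispatcher.subscribe('todo:added', function (msg) { undoStack.push(msg); });
dispatcher.publish('todo:added', { id: 7, description: 'buy milk' });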
Of course the main drawback I've encountered is that I haven't done a very good job of managing the complexity of the thing. For that, if I had it all to do over, I would study very, very carefully the "functional reactive" paradigm. There is one existing implementation of that paradigm in JavaScript called Flapjax. I would ensure that the view layer followed that model of execution, if not use the Flapjax library specifically. (I'm not sure Flapjax itself is such a great execution of the idea, but the idea itself is important.)
The other big implementation of functional reactive is Quartz Composer, which comes free with Apple's developer tools (which are free with the purchase of any Mac). If that is available to you, have a close look at that and how it works. (It even has a JavaScript patch so you can prototype your application with a prebuilt view layer.)
The main takeaway from the functional reactive paradigm is to make sure that the view doesn't appear to maintain any kind of state except the one you've just given it to display. To put it in more concrete terms, I started out with "add an object to the screen" / "remove an object from the screen" type messages, and I'm now tending more towards "display this list of objects, and I'll let you figure out the most efficient way to get from the current display to what I now want you to display". This has eliminated a whole host of bugs having to do with sloppily managed state.
This also gets around another problem I've been having with bugs caused by messages arriving in the wrong order. That's a big one to solve, but you can sidestep it by sending the final desired state in one big package, rather than a sequence of steps to get there.
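In message terms, using the dispatcher sketched above, the difference looks roughly like this (the topic names are invented):

// imperative: a sequence of steps the view must replay in exactly the right order
dispatcher.publish('screen:add',    { id: 1, label: 'A' });
dispatcher.publish('screen:add',    { id: 2, label: 'B' });
dispatcher.publish('screen:remove', { id: 1 });

// declarative: one message describing the final desired state;
// the view diffs it against whatever it is currently showing
dispatcher.publish('screen:display', { items: [{ id: 2, label: 'B' }] });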
Anyways, that's my little rant. Let me know if you have any additional questions about my wartime experience.
At the risk of being flamed, I would suggest another framework besides jQuery, or else you'll risk hitting its performance ceiling. Its à la mode plugins will also present a bit of a problem in trying to separate your M, V and C.
Dojo is well known for its Data Stores for binding to server-side data with different transport protocols, and for its object-oriented, lightning-fast widget system that can be easily extended and customized. It has a style that helps guide you into clean, well-divided code, though it's not strictly MVC. That would require a little extra planning.
Dojo has a steeper learning curve than JQuery though.
More to your question: the AJAX calls and the object (or Data Store) that holds and queries this data would be your Model. The widgets and CSS would be your View. And the Controller would basically be your application code that wires it all together.
In order to keep them separate, I'd recommend a loosely coupled, event-driven system. Try to access objects directly as little as possible, keeping them "black boxed", and get data via custom events or pub/sub topics.
JavaScriptMVC (javascriptmvc.com) is an excellent choice for organizing and developing a large scale JS application.
The architecture design is very practical. There are 4 things you will ever do with JavaScript:
Respond to an event
Request Data / Manipulate Services (Ajax)
Add domain-specific information to the AJAX response.
Update the DOM
JMVC splits these into the Model, View, Controller pattern.
First, and probably the most important advantage, is the Controller. Controllers use event delegation, so instead of attaching events, you simply create rules for your page. They also use the name of the Controller to limit the scope of what the controller works on. This makes your code deterministic, meaning if you see an event happen in a '#todos' element you know there has to be a todos controller.
$.Controller.extend('TodosController', {
    'click': function(el, ev){ ... },
    '.delete mouseover': function(el, ev){ ... },
    '.drag draginit': function(el, ev, drag){ ... }
});
Next comes the model. JMVC provides a powerful Class and a basic Model that let you quickly organize Ajax functionality (#2) and wrap the data with domain-specific functionality (#3). When complete, you can use models from your controller like:
Todo.findAll({after: new Date()}, myCallbackFunction);
Finally, once your todos come back, you have to display them (#4). This is where you use JMVC's view.
'.show click': function(el, ev){
    Todo.findAll({after: new Date()}, this.callback('list'));
},
list: function(todos){
    $('#todos').html( this.view(todos) );
}
In 'views/todos/list.ejs'
<% for(var i = 0; i < this.length; i++){ %>
    <label><%= this[i].description %></label>
<% } %>
JMVC provides a lot more than architecture. It helps you in every part of the development cycle with:
Code generators
Integrated Browser, Selenium, and Rhino Testing
Documentation
Script compression
Error reporting
I think there is definitely a place for "M" and "C" in JavaScript.
Check out AngularJS.
It helps you with your app structure and strict separation between "view" and "logic".
Designed to work well together with other libs, especially jQuery.
A full testing environment (unit, e2e) plus dependency injection are included, so testing is a piece of cake with AngularJS.
There are a few JavaScript MVC frameworks out there, this one has the obvious name:
http://javascriptmvc.com/
The problem: I have a jQuery heavy page that has a built in admin interface. The admin functions only trigger when an admin variable is set. These functions require a second library to work properly and the second file is only included if the user is an admin when the page is first created. The functions will never trigger for normal users and normal users do not get the include for the second library.
Is it bad to reference a function that does not exist in the currently included files, even if that function can never be called? (Does that make sense? :)
Pseudocode:
header: (notice that admin.js is not included)
<script type="text/javascript" src="script.js"></script>
<script type="text/javascript" src="user.js"></script>
script.js: (admin functions referenced but can't be executed)
admin = false; // Assume this
$(".something").dblclick(function(){
if(admin)
adminstuff(); // Implemented in admin.js (not included)
else
userstuff();
});
Ideas:
I suppose two separate files for users and admins could be used, but I feel that would be an overly complicated solution (I don't want to maintain two large files with only a few lines of difference). The only reason I include a reference to the admin function in this file is that I need to attach it to page elements that get refreshed as part of the script. When jQuery refreshes the page, I need to reattach functions to the interactive elements.
The Question:
I want to keep things very simple and not include files that won't be used by the user. Is this a good way to do it, or should I be going another route?
The code should operate without error, since the admin functions without implementation will not be called. The only thing that is really being wasted is bandwidth to transmit the admin code that is not used.
However, let me caution against security through obscurity. If the user were to view this code and see that there are admin functions that they cannot access, they might get curious and try to download the "admin.js" file and see what these functions do. If your only block to keeping admin functions from being performed is to stop including the file, then some crafty user will probably quickly find a way to call the admin functions when they should not be able to.
If you already do server side authentication/permissions checking for the admin function calls just ignore my previous paragraph :-)
Personally, I would bind (or re-bind) the event in admin.js:
$(function() {
$(".something").dblclick(function(){
adminstuff();
});
});
function adminstuff()
{
// ...
}
That way, the adminstuff() call and the function will not be visible to "normal" users.
Good question. It shouldn't cause any JavaScript problems.
Other things to consider: you are potentially exposing your admin capabilities to the world when you do this, which might be useful to hackers. That's probably not much of a concern, but it is something to be aware of.
See also:
Why can I use a function before it’s defined in Javascript?
I don't think it matters. If it makes you feel better, you can make an empty stub function.
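For instance, a one-line stub that only fills the gap when admin.js hasn't been loaded (the name is taken from the pseudocode above):

// define a no-op fallback only if admin.js hasn't already provided the real function
window.adminstuff = window.adminstuff || function () {};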
I don't think there's a dogmatic answer to this in my opinion. What you're doing is...creative. If you're not comfortable with it, that could be a sign to consider other options. But if you're even less comfortable with those then that could be a sign this is the right thing (or maybe the least wrong thing) to do. Ultimately you could mitigate the confusion by commenting the heck out of that line. I wouldn't let yourself get religious over best practices. Just be willing to stand by your choice. You've justified it to me, anyway.
JavaScript is dynamic; it shouldn't care if the functions aren't defined, as long as you never actually call them.
If you put your admin functions in a namespace object (probably a good practice anyway), you have a couple of options (sketched briefly after this list).
Check for the existence of the function in the admin object
Check for the existence of the admin object (possibly replacing your flag)
Have an operations object instead, where the admin file replaces select functions when it loads. (Even better, use prototypical inheritance to hide them.)
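A minimal sketch of the first two options, with made-up namespace names:

// app.admin only exists when admin.js has been loaded
var app = window.app || {};

$(".something").dblclick(function () {
    if (app.admin && typeof app.admin.adminstuff === "function") {
        app.admin.adminstuff(); // option 1: check for the function itself
    } else if (app.userstuff) {
        app.userstuff();
    }
});

// option 2: the presence of the namespace replaces the boolean flag
var isAdmin = !!app.admin;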
I think you should be wary that you are setting yourself up for massive security issues. It is pretty trivial in Firebug to change a variable such as admin to true, and seeing as admin.js is publicly accessible, it's not enough to simply not include it on the page, since it is also simple to add another script tag to the page with Firebug. A moderately knowledgeable user could easily give themselves admin rights in this scenario. I don't think you should ever rely on a purely client-side security model.