Rails asset pipeline - Garber-Irish, administration & keeping it DRY - JavaScript

I'm using the so-called Garber-Irish technique for splitting up my javascript files.
My question is: I have a model (Item, say) with an init function that lives in app/assets/javascripts/item/item.js, e.g.
MYAPP.items = {
    init: function() {
        alert("do something");
    }
};
Now, let's say I have an administration side to this app, and I don't really want to include the admin javascript in the main bulk. So I have a separate system_administration.js which requires the regular javascripts/item/item.js above, but also requires a javascripts/admin/item/item.js which would look something like:
MYAPP.items = {
    init: function() {
        alert("also do this");
    }
};
I want to load both the common javascript above and the administration-specific javascript - effectively merging the two init functions and keeping things nicely DRY.
Questions:
Is this a sensible approach?
Is it possible?
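Note that, as written, the second MYAPP.items assignment would simply replace the first rather than merge with it. A rough sketch of an actual merge (assuming the common item.js is required first, as in the manifest above) is to wrap the existing init from the admin file:
// javascripts/admin/item/item.js - wrap the common init instead of redefining the object
(function () {
    var commonInit = MYAPP.items.init;   // keep a reference to the shared init
    MYAPP.items.init = function () {
        commonInit();                    // run the common behaviour first
        alert("also do this");           // then the admin-specific additions
    };
}());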

Keen for comments - but what I have done (for the moment) is change the init function to:
UTIL.exec( "common" );
UTIL.exec( controller );
UTIL.exec( "admin_"+controller );
UTIL.exec( controller, action );
UTIL.exec( "admin_"+controller, action );
(so, I'm adding in an "admin_") and then for the admin javascript files I've simply added in an admin prefix:
MYAPP.admin_items = {
    init: function() {
        ....
Slightly nasty but I think it'll do me until someone has a nicer suggestion!
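For completeness, here is a minimal sketch of the UTIL helper those calls assume, roughly as described in the original Garber-Irish article, with the admin_ lines added; the data-controller and data-action attributes on <body> are assumed to be emitted by the Rails layout:
UTIL = {
    exec: function(controller, action) {
        var ns = MYAPP;
        action = (action === undefined) ? "init" : action;
        if (controller !== "" && ns[controller] && typeof ns[controller][action] === "function") {
            ns[controller][action]();
        }
    },
    init: function() {
        var body = document.body,
            controller = body.getAttribute("data-controller"),
            action = body.getAttribute("data-action");
        UTIL.exec("common");
        UTIL.exec(controller);
        UTIL.exec("admin_" + controller);
        UTIL.exec(controller, action);
        UTIL.exec("admin_" + controller, action);
    }
};
$(UTIL.init);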

Related

Shared JavaScript File with Different Definitions of a Function Call

I have a 300-line javascript file that sets up jQuery event handlers and other needed functions for a partial view that's used by multiple views within an ASP.NET MVC application. The event handlers handle 99% of everything identically regardless of which view is using the partial. This question is about that 1% difference.
Since JavaScript doesn't have interfaces, is it safe to define the function that handles the differing parts in a separate file, loaded depending on which view is used, and have one or more of the event handlers call it? If not, what would be the best way to handle this situation? In other languages I'd use interfaces and/or abstract classes for this.
Example:
shared file
$(document).ready(function() {
    //shared variables here for methods
    $(document).on('click', '.selectable-table tbody tr', function() {
        //do shared actions
        mySpecificFunction();
        //finish shared actions (if necessary)
    });
});
Definition1.js
function mySpecificFunction() {
    //do stuff
}
Definition2.js
function mySpecificFunction() {
    //do other stuff
}
The views would load the appropriate scripts as such:
<script src="definitionX.js"></script>
<script src="sharedScript.js"></script>
The "signature" (term being used generously because javascript) of mySpecificFunction() would be the same for each definition, but something in my gut is telling me that this is bad practice. Is there a better/correct way to do this or a design pattern for this purpose?
I think you can use an OOP approach here, and you don't need abstract classes or interfaces for that; instead you can use objects (which are more flexible than in other languages).
For example, you can have a base View prototype with the shared code and then load a specific view1.js or view2.js, where the base prototype is extended with specific code:
$(document).ready(function() {
    // view is a view instance coming from the specific view.js
    view.init();
});
// sharedScript.js, view prototype
var View = {
    init: function() {
        var self = this; // keep a reference: inside the jQuery handler, "this" is the DOM element
        $(document).on('click', '.selectable-table tbody tr', function() {
            // do shared actions
            // ...
            // do specific actions
            self.mySpecificFunction();
        });
    },
    mySpecificFunction: function() {
        //do specific things, can be left empty in the "prototype" object
        return;
    }
};
// view1.js
var view = Object.create(View);
view.mySpecificFunction = function() {
    alert('view 1');
};
// view2.js
var view = Object.create(View);
view.mySpecificFunction = function() {
    alert('view 2');
};
And the views would load shared and specific scripts:
<script src="sharedScript.js"></script>
<script src="view1.js"></script>
This is just a rough idea which can be improved; for example, you may want to concatenate and compress all your JS code into a single file for production. In that case the global view variable coming from view1.js, view2.js, etc. would become a problem.
An improvement can be some kind of "router" which will detect what view should be instantiated:
$(document).ready(function() {
    router.when('/', function() {
        view = HomePageView();
    }).when('/about', function() {
        view = AboutPageView();
    });
    view.init();
});
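The router above is pseudo-code; a minimal hand-rolled version with that shape might look like the following (just a sketch, keyed off window.location.pathname):
var router = {
    routes: {},
    when: function(path, fn) {
        this.routes[path] = fn;   // remember a setup callback per path
        return this;              // allow chaining .when().when()
    },
    resolve: function() {
        var fn = this.routes[window.location.pathname];
        if (fn) { fn(); }         // run the callback registered for the current page
    }
};
With that, the ready handler would register the routes, call router.resolve(), and then view.init().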
The approach outlined above will work, but it's not the best in terms of maintainability. Adding one file or another via a script tag to import the specific function doesn't necessarily make it clear to another developer that you have actually changed the behaviour of the event handlers in the shared code.
A simple alternative could be that within each view you would wrap the partial view within a containing element that has an identifying css class to differentiate between the behaviour required at that point.
Then assign event handlers individually for those different css classes:
$(document).ready(function() {
    //shared variables here for methods
    $(document).on('click', 'div.type1 .selectable-table tbody tr', function() {
        //do shared actions
        mySharedActions();
        mySpecificFunction1();
        //finish shared actions (if necessary)
    });
    $(document).on('click', 'div.type2 .selectable-table tbody tr', function() {
        //do shared actions
        mySharedActions();
        mySpecificFunction2();
        //finish shared actions (if necessary)
    });
});
This would allow you to keep all your specific functions together in one place, and it makes the change in behaviour driven by the CSS class explicit for future developers to see.

How to use multi-function modules with requireJS

I'm a Java programmer who's trying to convert some Applets into JavaScript. I have a case with lots of small Java classes, which have a cohesive design that I'd like to keep if possible in JavaScript. RequireJS seems like a great way to keep it modular.
However, all the tutorials for requireJS I've seen treat a module as only having a single function, which only works for a couple of my Java classes. For example, this tutorial defines the module purchase.js as having a single function purchaseProduct():
define(["credits","products"], function(credits,products) {
console.log("Function : purchaseProduct");
return {
purchaseProduct: function() {
var credit = credits.getCredits();
if(credit > 0){
products.reserveProduct();
return true;
}
return false;
}
}
});
Is there a way to allow more than one function per module in this way? In my head, a module is not necessarily just one function.
Or is RequireJS the wrong tree to be barking up for preserving my modular design from Java?
Edit
I found a much more useful set of examples for transitioning from Java to JS with requireJS at https://gist.github.com/jonnyreeves/2474026 and https://github.com/volojs/create-template
Yes, there's a way and it's very simple.
See, what you're returning (the module) is simply an object. In your code, this object has 1 property (purchaseProduct), but it can have as many as you'd like. And they don't all have to be functions either:
return {
    purchaseProduct: function () {/*...*/},
    returnProduct: function () {/*...*/},
    discountLevel: 0.25
};
Just add more functions to the object you return:
return {
    purchaseProduct: function() {
        // ...
    },
    somethingElse: function () {
        // ...
    }
};
If you return this as the value of your module, you'll have a module that exports two functions purchaseProduct and somethingElse.
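Putting that together, a sketch of one module exporting several things plus a consumer (the returnProduct function and the app.js file name are made up for illustration):
// purchase.js - one module, several exports
define(["credits", "products"], function(credits, products) {
    return {
        purchaseProduct: function() {
            if (credits.getCredits() > 0) {
                products.reserveProduct();
                return true;
            }
            return false;
        },
        returnProduct: function() { /* ... */ },  // hypothetical second function
        discountLevel: 0.25                       // plain data works too
    };
});
// app.js - the consumer sees every property of the returned object
require(["purchase"], function(purchase) {
    purchase.purchaseProduct();
    purchase.returnProduct();
    console.log(purchase.discountLevel);
});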

Collect multiple RequireJS dependencies

There's a set of JS modules representing similar OOP Classes: think of e.g. different types of backend tasks (SendEmailTask, WriteToDbTask, WriteToDiskTask), or different actions on a drawing canvas (DrawArc, DrawLine, DrawBezier). Those are just examples.
Each of those is a single JS file, with its own define, and they're all located in a common directory. In a client module that depends on all of those, the dependency list and argument list have to include each one of those separately, e.g. something like:
define([
    'tasks/sendEmailTask', 'tasks/writeToDbTask', 'tasks/writeToDiskTask', ...
], function (SendEmailTask, WriteToDbTask, WriteToDiskTask, ...) {
    /* ... */
    /* ... new SendEmailTask(); */
    /* ... new WriteToDbTask(); */
    /* ... new WriteToDiskTask(); */
});
and both lists must be updated every time a new module is added to the set, e.g. MakeCoffeeTask, which seems to me like a BadThing™.
Is there a way to avoid those last issues? I was thinking of a couple of possible ways, but I don't know how to make them work:
Create a kind of namespace module. Each of those sub-modules depends on the namespace one and adds its definition to it. But then, if the client also depends only on the namespace, how do you ensure that the client gets loaded after all the sub-modules?
Express both dependencies and arguments with some kind of wildcard, like 'tasks/*' for dependency and < no idea > for arguments.
You can make a separate module for each section, for example 'tasks/main'.
In tasks/main you include all the files you need, all the task functions are assigned there, and you just call the high-level functions. Take a look:
define(['tasks/main'], function(task) {
    task.add('Project Status meeting');
    task.SendEmailNotification('tAll');
});
In tasks/main.js:
define(['tasks/sendEmailTask', 'tasks/writeToDbTask', 'tasks/writeToDiskTask'], function(sendEmailTask, writeToDbTask, writeToDiskTask) {
    var task = {
        add: function(taskObj) {
            // write your code using writeToDbTask, writeToDiskTask
        },
        SendEmailNotification: function(whom) {
            // write your code using sendEmailTask, writeToDbTask, writeToDiskTask
        }
    };
    return task;
});
Does that help?
I was not able to find any other solution. This is what I came up with in the meantime... it is just an improvement over the current situation, but I don't think it solves the issues I posed.
// in tasks/taskNamespace.js or in any client code
define(function(require) {
    var _taskArray = [
        require('tasks/sendEmailTask'),
        require('tasks/writeToDbTask'),
        require('tasks/writeToDiskTask')
        /* ..., */
    ];
    var TaskNamespace = {};
    _taskArray.forEach(function (taskClass) {
        TaskNamespace[taskClass.name] = taskClass;
    });
    return TaskNamespace;
});
It takes advantage of the RequireJS sugar syntax and of the JS Function.name feature proposed in ES6.
What's the end result? You still have to manually add each Class to a list, but it's only one list instead of two. WHOA
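Client code would then consume the collected namespace roughly like this (assuming each task module exports a named constructor function, since the keys come from Function.name):
define(['tasks/taskNamespace'], function (Tasks) {
    var emailTask = new Tasks.SendEmailTask();   // key equals the constructor's name
    var dbTask = new Tasks.WriteToDbTask();
    // ...
});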

Sharing JS files across application

We're developing a mobile application and we're trying to figure out the best approach to share javascript functions across the application.
At the moment we have individual files that simply have:
var LIB = {
    URL: "http://localhost/service",
    connect: function() {
        // connect to server
        $.ajax({ url: this.URL }); // etc etc
        // call a private function?
        this._somethingElse();
    },
    _somethingElse: function() {
        // do something else
    }
};
Then we simply call things like:
LIB.connect(); or LIB.disconnect();
across any file.
This also gives us access to LIB.URL as well.
My question is whether this approach is the best?
I've seen people use the following approach as well, but to what benefit?
var LIB = function () {
    this.URL = "http://localhost/service";
    this.connect = function () {
        var myself = this;
        // connect to server
        $.ajax({ url: this.URL }); // etc etc
        // call a private function?
        myself._somethingElse(); // best way to invoke a private function?
    };
    this._somethingElse = function () {
        // do something else
    };
};
This requires the following:
var lib = new LIB();
lib.connect();
EDIT:
I've also seen the following:
window.lib = (function () {
    function Library () {
    }
    var lib = {
        connect: function () {
            // connect to server
        }
    };
    return lib;
}());
I'm slightly confused with all these options.
It just depends on which you like better. I (on a personal level) prefer the former, but to each his own. The latter has the disadvantage of requiring you either to remember the new before using it, or to keep track of an already created instance.
Additionally, on a technical level the first one should give slightly better (as in, barely noticeable) performance, since you don't create new function objects for each instance.
Edit: Yes, the first way is definitely the fastest.
I highly recommend going with a module system. Until ES6 comes along (http://wiki.ecmascript.org/doku.php?id=harmony:modules), you will have to use a 3rd party library in order to do this.
Each object/class/util/etc is a module.
A module exports a public api, whereas consuming modules import other modules by declaring their dependencies.
Two "standards" that exist: AMD and CommonJS. In the browser, a library like RequireJS, which uses the AMD standard, is very popular. I recommend checking out their site first: http://requirejs.org/ and see their examples.
The main advantage here is that you only expose the public api, which allows you to create a sandbox of your functionality. It's also more explicit as it's really easy to see what your module depends on, instead of relying on global objects.
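To illustrate, the LIB object from the question might look roughly like this as an AMD module (a sketch only; the module names and file layout are assumptions):
// lib.js
define(['jquery'], function ($) {
    var URL = "http://localhost/service";

    function somethingElse() {
        // genuinely private: never part of the returned api
    }

    // only what is returned here is visible to consumers
    return {
        connect: function () {
            $.ajax({ url: URL }); // etc etc
            somethingElse();
        }
    };
});
// someOtherModule.js - the dependency is declared explicitly
define(['lib'], function (lib) {
    lib.connect();
});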
There are several different approaches to structuring JavaScript code for re-usability. You can research these and decide which is best. Personally, I have used the second approach that you've outlined. However, I separate my code into sections and actually adhere to an MVVM structure. So for instance, I have namespaces called Models and ViewModels. Each of my JS files begins with:
window.[APP NAME].Models.[CLASS/MODULE NAME] or window.[APP NAME].ViewModels.[CLASS/MODULE NAME]
So, let's say I have a namespace called mynamespace and I have a module/class called myclass. My js file would begin with:
window.MYNAMESPACE = window.MYNAMESPACE || {};
window.MYNAMESPACE.ViewModels = window.MYNAMESPACE.ViewModels || {};
window.MYNAMESPACE.ViewModels.MyClass = function () {
    // a public function
    this.func1 = function () {
    };
    // a private function
    function func2() {
    }
};
I would then consume that class by calling:
var myClassModel = new window.MYNAMESPACE.ViewModels.MyClass();
myClassModel.func1();
This gives you some nice encapsulation of your code. Some of the other patterns you can research/google are: Prototype Pattern, Module Pattern, Revealing Module Pattern, and the Revealing Prototype Pattern.
I hope that helps but if you have any questions on what I've just said, feel free to comment on this post.
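As a quick taste of one of those, here is the Revealing Module Pattern applied to the LIB example from the question (a sketch, not a drop-in replacement):
var LIB = (function ($) {
    var URL = "http://localhost/service";

    function somethingElse() {
        // stays private; not revealed below
    }

    function connect() {
        $.ajax({ url: URL }); // etc etc
        somethingElse();
    }

    // reveal only the members meant to be public
    return {
        URL: URL,
        connect: connect
    };
}(jQuery));
LIB.connect();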

understanding a modular javascript pattern

I'm trying to write 'better' javascript.
Below is one pattern I've found, and am trying to adopt. However, I'm slightly confused about its use.
Say, for example, I've got a page called "Jobs". Any JS functionality on that page would be encapsulated in something like:
window.jobs = (function(jobs, $, undefined) {
    return {
        addNew: function() {
            // job-adding code
        }
    };
})(window.jobs || {}, jQuery);
$(function() {
    $('.add_job').on('click', function(event) {
        event.preventDefault();
        window.jobs.addNew();
    });
});
As you can probably deduce, all I've done is replace the code that would have sat inside the anonymous event-handler function with a call to a function in my global jobs object. I'm not sure why that's a good thing, other than that it reduces the possibility of variable collisions and makes the whole thing a bit neater, but that's good enough for me.
The - probably fairly obvious - question is: all my event-binding init-type stuff is still sitting outside my shiny new jobs object: where should it be? Inside the jobs object? Inside the return object inside the jobs object? Inside an init() function?
I'm just trying to get a sense of a stable, basic framework for putting simple functionality in. I'm not building JS apps, I'd just like to write code that's a little more robust and maintainable than it is currently. Any and all suggestions are warmly welcomed :)
You can break your application down into as many modules / objects as you like.
For instance, you can have one object / module that caches and defines all your DOM nodes and another one that just handles events. For example:
(function ( win, doc, $, undef ) {
    win.myApp = win.myApp || { };
    var eventHandler = {
        onJobClick: function( event ) {
            event.preventDefault();
            myApp.addNew();
        }
    };
    var nodes = (function() {
        var rootNode = $( '.myRootNode' ),
            addJob = rootNode.find( '.add_job' );
        return {
            rootNode: rootNode,
            addJob: addJob
        };
    }());
    $(function() {
        myApp.nodes.addJob.on( 'click', myApp.handler.onJobClick );
    });
    myApp.nodes = nodes;
    myApp.handler = eventHandler;
}( this, this.document, jQuery ));
It doesn't really matter how you create singletons in this (module) pattern, whether as a literal, a constructor, Object.create() or whatnot. It needs to fit your requirements.
But you should try to create as many specific modules/objects as necessary. Of course, it makes even more sense to separate those singletons / modules / objects into multiple javascript files and load them on demand, and before you can say knife, you're in the world of modular programming patterns, dealing with requireJS and AMD or CommonJS modules.
Encapsulation-wise, you're fine: you could even just declare addNew in the jQuery closure and you'd still avoid the global scope. I think what you're getting at is more of implementing something close to an MVC architecture.
Something I like to do is create an object that you instantiate with a DOM element and that takes care of its own bindings/provides methods to access its controls etc.
Example:
// (pretend we're inside a closure already)
var myObj = function(args) {
    this.el = args.el; // just a selector, e.g. #myId
    this.html = args.html;
    this.bindings = args.bindings || {};
};
myObj.prototype.appendTo = function(elem) {
    elem.innerHTML += this.html;
    this.bindControls();
};
myObj.prototype.remove = function() {
    $(this.el).remove(); // using jQuery
};
myObj.prototype.bindControls = function() {
    var self = this; // inside the jQuery handler, "this" would be the DOM element
    $.each(this.bindings, function(eventName, handler) { // event name : handler function
        $(self.el).on(eventName, function(e) { return handler.call(self, e); });
    });
};
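A hypothetical usage sketch of that object (the selector, markup and binding are made up for illustration; the binding key is passed straight to jQuery's .on()):
var panel = new myObj({
    el: '#myPanel',
    html: '<div id="myPanel"><button class="close">x</button></div>',
    bindings: {
        'click': function (e) {
            this.remove();   // handlers run with the object instance as "this"
        }
    }
});
panel.appendTo(document.body);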
The way you are doing it right now is exactly how I do it as well; I typically create the window objects inside the anonymous function itself and then alias them inside it (in this case jClass = window.jClass).
(function (jClass, $, undefined) {
    /// <param name="$" type="jQuery" />
    var VERSION = '1.31',
        UPDATED_DATE = '7/20/2012';
    // Private Namespace Variables
    var _self = jClass;        // internal self-reference
    jClass = window.jClass;    // (fix for intellisense)
    $ = jQuery;                // save rights to jQuery (also fixes vsdoc Intellisense)
    // I init my namespace from inside itself
    $(function () {
        jClass.init('branchName');
    });
    jClass.init = function(branch) {
        this._branch = branch;
        this._globalFunctionality({ globalDatePicker: true });
        this._jQueryValidateAdditions();
        // put GLOBAL IMAGES to preload in the array
        this._preloadImages([ '' ]);
        this._log('*******************************************************');
        this._log('jClass Loaded Successfully :: v' + VERSION + ' :: Last Updated: ' + UPDATED_DATE);
        this._log('*******************************************************\n');
    };
    jClass._log = function() {
        // NOTE: global log (cross-browser console.log - for testing purposes)
        try { console.log.apply(console, arguments); }
        catch (e) {
            try { opera.postError.apply(opera, arguments); }
            catch (e) { /* IE currently shut OFF : alert(Array.prototype.join.call(arguments, ' ')); */ }
        }
    };
}(window.jClass = window.jClass || {}, jQuery));
The reason I leave them completely anonymous like this is that, say, in another file I want to add more functionality to jClass; I simply create another:
(function (jClass, $, undefined) {
    jClass.newFunction = function (params) {
        // new stuff here
    };
}(window.jClass = window.jClass || {}, jQuery));
As you can see, I prefer the object.property notation, but you can use object literals (property: value) instead; it's up to you!
Either way, keeping all of this separate and encapsulated, free of actual page logic, makes it easier to put it in a global JS file that every page on your site can use, as in the example below.
jClass._log('log this text for me');
You don't want to intertwine model logic with your business logic, so you're on the right path separating the two and allowing your global namespace/class/etc. to be more flexible!
You can find a comprehensive study of the module pattern here: http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html. It covers all aspects of the block-scoped module approach. However, in practice you are going to have quite a number of files encapsulating your code, so the question is how to combine them properly. With AMD, the multiple HTTP requests produced by loading every module separately will rather hurt your page response time, so you can instead go with CommonJS modules compiled into a single JavaScript file suitable for in-browser use. Take a look at how easy it is: http://dsheiko.github.io/cjsc/
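For a flavour of the CommonJS side, the same LIB idea as two CommonJS modules compiled into one browser bundle might look like this (file names are assumptions):
// lib.js
var $ = require('jquery');
module.exports = {
    connect: function () {
        $.ajax({ url: "http://localhost/service" });
    }
};
// main.js
var lib = require('./lib');
lib.connect();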
