Guidelines for breaking down JavaScript software structure - javascript

I'm trying to refactor my old Colorchess (ChessHighlight) program. It's a chess board that highlights the influence of each chessman on every turn, to help beginners understand the game.
Depending on the pressure balance on the board at a given turn, tiles are colorized as follows:
green = no pressure
white = the white player owns the tile
black = the black player owns the tile
a color picked from a green-yellow-orange-red gradient = a contested tile
An AI is planned, but for the moment I'm focusing on making the game playable correctly across devices, in both situations: table gaming on a tablet, or network gaming.
I decided to code the client side in JavaScript (I love it!), and server syncing will be in PHP, since that's what my current hosting environment runs.
My questions and thoughts come when I try to put it all together:
(client-side libraries)
- RequireJS --> file loading
- KnockoutJS --> UI event binding
- ICanHaz --> templating
- Zepto --> DOM manipulation
- and maybe UnderscoreJS for utilities
I'm worried about producing spaghetti code that is difficult to understand and maintain.
In the old program, ChessHighlight, there were lots of interleaved constructor declarations and prototype extensions, for example:
// file board.js
function Board() { ... }
function Tile() { ... }
// next included file:
function Chessman() { ... }
// again in a third included file,
// since Board and Chessman are defined by now:
Tile.prototype.put = function (chessman) { ... };
Tile.prototype.empty = function () { ... };
Due to the highly coupled nature of the game, each file included in the stack carries more and more definitions, and the code became messy...
One difficulty is that the game needs a transactional implementation, since I wrote paired setters like:
// both must be set together (think ACID commit in an RDBMS)
tile_h8.chessman = rook_white_1;
rook_white_1.tile = tile_h8;
For now I've partially solved this issue by creating an "Object Relational Pool Manager", which is intended to store:
- references to objects of any kind (Board, Chessman, Tile, Influence...)
- the relations between objects
- and, ideally, some type checks and cardinality/arity counting
(I'm still baking this code; a rough sketch of the idea follows)
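Something like this (a rough sketch only; all names are provisional):

// The pool owns every relation, so both sides of a link are always
// updated together (the "ACID commit" from above, at the JS level).
function RelationPool() {
  this.tileOf = {};     // chessman id -> tile id
  this.chessmanOn = {}; // tile id -> chessman id
}

RelationPool.prototype.place = function (chessmanId, tileId) {
  // Unlink any previous relations first, so the two maps never disagree
  var oldTile = this.tileOf[chessmanId];
  if (oldTile !== undefined) { delete this.chessmanOn[oldTile]; }
  var oldMan = this.chessmanOn[tileId];
  if (oldMan !== undefined) { delete this.tileOf[oldMan]; }
  // Then commit both directions in one call
  this.tileOf[chessmanId] = tileId;
  this.chessmanOn[tileId] = chessmanId;
};

// Replaces the paired setters:
// tile_h8.chessman = rook_white_1; rook_white_1.tile = tile_h8;
var pool = new RelationPool();
pool.place('rook_white_1', 'h8');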
SOME QUESTIONS:
How do you write extensible code (elegantly, with no class-and-interface simulation, rather prototypes and closures) in a way that lets you define the basic atoms of code (Tile, Board, Chessman) in very short files, and then glue them together in another place, adding behavior with callbacks?
NOTE: I've tried reading game engine code (Crafty, Quintus...), but the cores of these engines (1600 lines of code), although well documented, are difficult to understand (where is the starting point? where are the runtime flows?).
UML: I have the feeling that classical methodologies could rapidly fail with closures, callbacks and nested scopes. They seem instinctive to write and understand, but drawing diagrams of them seems to be a trick... what do good JS developers use as a safety rope to climb 1500+ line-of-code peaks?
And the last one: I would like a "jQuery-like" engine API, to plug it easily into the computed observables of the Knockout ViewModels of the GUI. Something like this:
[code]
var colorchess = new Colorchess( my_VM_for_this_DIV_part );
colorchess.reset( "standard-game" );
colorchess("a1") --> a wrapper for "a1" tile
colorchess("h8").chessman() --> a wrapper for "h8" tile's chessman (rook)
// iterate on black chessman
colorchess("black").each( function( ref, chessman) {})
// a chainable example
colorchess("white").chessman("queen").influences()
[/code]
... but for the moment, I don't know exactly how to model, write and test those kinds of mutable objects.
Suggestions are welcome. Thanks for your help.

I don't think that defining your objects with constructor functions is bad. Defining them as closures is worse, because it consumes more resources and isn't as easily optimized by JS engines.
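For instance, a minimal illustration of the difference (Tile is just a stand-in for one of your atoms):

// Constructor + prototype: one shared copy of each method,
// and all instances share the same hidden shape.
function Tile(name) {
  this.name = name;
  this.chessman = null;
}
Tile.prototype.isEmpty = function () { return this.chessman === null; };

// Closure-based factory: every instance carries its own fresh copy
// of isEmpty(), which costs more memory and is harder to optimize.
function makeTile(name) {
  var chessman = null;
  return {
    name: name,
    isEmpty: function () { return chessman === null; }
  };
}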
Tightly coupled objects are a problem you'll have with closures as well; you can use the mediator pattern for that. A mediator can complicate your code, but it makes the application flow easy to control once it's set up.
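A minimal mediator sketch (the event names and modules are only examples): instead of Tile and Chessman calling each other directly, both talk to a central object:

var mediator = {
  channels: {},
  on: function (event, fn) {
    (this.channels[event] = this.channels[event] || []).push(fn);
  },
  emit: function (event, data) {
    (this.channels[event] || []).forEach(function (fn) { fn(data); });
  }
};

// board.js subscribes without knowing who publishes:
mediator.on('chessman:moved', function (move) {
  // update tiles, recompute influence colors, etc.
});

// chessman.js publishes without referencing Tile or Board:
mediator.emit('chessman:moved', { piece: 'rook_white_1', to: 'h8' });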
Having good tools is important in big JS projects. I've tried a few: the Google Closure Compiler with Eclipse, Visual Studio with TypeScript, and now Dart with the Dart Editor (= Eclipse with the right plugins). These can help you spot inconsistencies quickly and refactor more easily. TypeScript would be the only one that makes it easy to use existing JS libraries, because TypeScript is JS with optional extensions. I'm not sure how the AngularJS Dart port is coming along, but it's worth looking at.
As for ACID: if you're talking about updating the state of your client-side objects so that they reflect the data in the server's database, you can use promises instead of callbacks. That will improve the readability of XHR updates. Write your PHP so that update messages carry the desired state of both the piece and the tile, so the server can change the data in a single transaction.
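A hedged sketch of what that could look like on the client, using the fetch API (the endpoint, payload shape, and the tiles/pieces lookups are all assumptions, not your actual API):

function commitMove(move) {
  // One round trip; the PHP side applies both changes in a single
  // database transaction and replies with the committed state.
  return fetch('/api/move', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(move)
  }).then(function (res) {
    if (!res.ok) { throw new Error('Move rejected: ' + res.status); }
    return res.json();
  }).then(function (state) {
    // Apply the server's committed state to both objects together,
    // so the client never displays half a transaction.
    tiles[state.tile.id].chessman = state.chessman.id;
    pieces[state.chessman.id].tile = state.tile.id;
  });
}

commitMove({ piece: 'rook_white_1', to: 'h8' });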

Related

Domain Driven Design for Node.js

I'm currently in the process of architecting a Node.js microservice-based application, and I'd like to make use of Domain Driven Design as a guideline for structuring the individual services. I have a few questions on this as follows:
As far as I understand, the domain layer should typically contain repository interfaces, and specific implementations of these interfaces should then be created in your infrastructure layer. This abstracts the underlying storage/technical concerns out of your domain layer. In the context of my project, seeing as JavaScript does not natively support things like interfaces, how would one go about achieving a similar effect?
One service in particular will be handling authentication by means of OAuth. Would one typically classify OAuth-related logic as an application service? Or would it fall under the infrastructure layer? My understanding is that it's not infrastructure-related, nor is it related to the core domain, but is still required as part of serving the application to clients. Hence, I've placed it within the application layer for now.
Following on from point 2, where would OAuth-related entities/repositories (i.e. tokens/clients) best be placed? I'm tempted to keep them in the domain layer along with my other entities/repositories, even though they aren't technically a business concern.
Some notes to add to the above:
I'm not keen on using TypeScript (as suggested here).
I am fully aware that some may consider JavaScript as being non-suitable for DDD-type approaches. I'm familiar with some of the pitfalls, and once again, I'm using DDD as a guideline.
On a side note, I have come up with the following project structure, which was heavily inspired by this great example of a DDD-based project in Laravel:
app/
----app.js
----package.json
----lib/
--------app/
------------service/
----------------oauth.js
----------------todo.js
--------domain/
------------list/
----------------model.js
----------------repository.js
----------------service.js
------------task/
----------------model.js
----------------repository.js
----------------service.js
--------http/
------------controller/
----------------list.js
----------------task.js
------------middleware/
----------------auth.js
----------------error.js
--------infrastructure/
------------db.js
------------logger.js
------------email.js
I would love to hear any thoughts you may have on this layout. I'm fully aware that the topic of project structure is somewhat opinion-based, but I'm always keen to hear what others have to say.
Have you considered wolkenkit?
It's a CQRS and event-sourcing framework for Node.js and JavaScript, that works pretty well with domain-driven design (DDD). It might give you a good idea of how to structure your code, and also a runtime to execute things without having to re-invent the wheel.
I know the people behind it, and they invested 3-4 years of thought, blood and sweat into this.
Domain-Driven Design guides decomposition of a system into a set of bounded contexts/services/microservices. However, the way you design each service is individual and depends on the service's business domain. For example, your business's core domain services and supporting domain services should be architected in different ways.
Even though this question is quite old, I think it's useful to add a clarification regarding your first question:
As far as I understand, the domain layer should typically contain repository interfaces, and specific implementations of these interfaces should then be created in your infrastructure layer. This abstracts the underlying storage/technical concerns out of your domain layer. In the context of my project, seeing as JavaScript does not natively support things like interfaces, how would one go about achieving a similar effect?
JavaScript does not provide interfaces. Instead of mimicking OOP concepts, why not have a look at more functional ones? Higher-order functions are very well suited to JavaScript: you can use them to "declare" dependencies and inject them at runtime:
const { FooData } = require('./data');

// The default implementation throws, acting as a declaration of the
// dependency that the infrastructure layer must provide.
const getFooOfId = (
  getFooOfIdImpl = async (fooId) => {
    throw new Error(`Can't retrieve foo of id ${fooId}: missing implementation`);
  }
) => async (fooId) => {
  try {
    const fooData = await getFooOfIdImpl(fooId);
    return FooData(fooData);
  } catch (err) {
    throw new Error(`Unable to retrieve Foo with id ${fooId}`);
  }
};

/*
This function is used to build a concrete implementation, for example with an in-memory database:

const inMemoryFooDatabase = {
  foo17: {
    id: 'foo17',
    foo: 'a foo value',
    bar: 'a bar value',
    foobaz: 1234,
  },
};

const getFooOfIdFromInMemoryDatabase = getFooOfId(async fooId => inMemoryFooDatabase[fooId]);
*/

module.exports = {
  getFooOfId,
};
Nothing prevents you from bypassing this function entirely, since there is no strong type-checking in JavaScript, but it acts as a declaration of your domain's "interface" needs.
If you want to learn more about this, you can read my post on the subject: Domain-Driven Design for JavaScript developers

Why is the interaction between a c++ add-on and javascript expensive in node.js?

Edit: Please answer if you have knowledge of Node's inner workings.
I've been really interested lately in diving into C++ add-ons for Node. I've read a lot of articles on the subject that basically state you should reduce chatter between C++ and Node. I understand that as a principle. What I'm really looking for is the why.
Again, why is the interaction between a c++ add-on and javascript expensive in node.js?
I can't speak to any knowledge of the inner workings of Node, but really it would boil down to this: the less stuff your code does, the faster (less expensive) it will be. Interacting with the Node API from the add-on is doing more stuff than not interacting with the API. For (a contrived) example, consider the difference between adding two integers in C++:
int sum = a + b;
...and evaluating an add expression in the JavaScript engine, which would involve creating a function variable, wrapping any arguments to this function, invoking the function, and unwrapping the return value.
The required wrapping/unwrapping/value conversion for C++/JS communication is itself justification enough to try to minimize the number of interactions between the two layers. Keep in mind that the C++ type system and the JS type system are really different: JS has no ints, strings are implemented differently, objects work differently, and so on.
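To see this cost from the JS side, here is a rough benchmark sketch (the add-on path and its add() export are assumptions about a hypothetical add-on):

// Hypothetical add-on exposing add(a, b), implemented in C++.
var addon = require('./build/Release/addon');

var N = 1e7;

console.time('plain JS');
var s1 = 0;
for (var i = 0; i < N; i++) { s1 = s1 + i; } // stays inside the JIT-compiled JS world
console.timeEnd('plain JS');

console.time('via add-on');
var s2 = 0;
for (var j = 0; j < N; j++) { s2 = addon.add(s2, j); } // crosses the C++/JS boundary every call
console.timeEnd('via add-on');

// The second loop is typically far slower even though the C++ addition
// is a single instruction: wrapping the arguments into engine handles and
// unwrapping the return value dominates the cost.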

Is the size of a JavaScript object affected by the number of methods in the (GWT) base-class?

I'm currently designing some API that will eventually be used in GWT, and I'm wondering if the number of methods in a Java class affects the size of an individual Object instance in JavaScript, after compiling the Java code using GWT.
Expressed another way, if I have an abstract base class with say 200 methods, and then sub-classes that override 2 of those, will the "cost" (memory usage) of those 200 methods be paid once for the application, or once per subclass instance?
In Java, the number of methods does not affect the Object size, but I don't know how this works in JavaScript.
The "200" number comes from trying to work-around the lack of Reflection in GWT, but I'd still be interested in the answer even if there was a way to get "fake reflection".
For this kind of question, there will not be an answer on Stackoverflow that beats your own experiments:
Write a class with 200 methods, then write one that subclasses it and overrides 2 methods. Compare JS code sizes to get a basic idea (though this is not the same as instance sizes). Use the Compile Report to get a better understanding. Then try compiling with style PRETTY or DETAILED, open the result in an editor, and verify if the generated code reuses methods or not. (Maybe also try it in OBFUSCATED mode to be sure).
Or, instantiate lots of objects, and inspect memory usage (several browsers offer tools, e.g. https://developers.google.com/chrome-developer-tools/docs/javascript-memory-profiling)
Generally, make sure that your methods do get called at all - otherwise, the compiler will optimize them away.

Is it better to have one large object in JavaScript or many smaller ones?

I'm writing a JS lib that reads chess games and turns them into replayable games. In a single web page there can be many games (each in its own div). What I'm wondering, thinking about performance, is whether it's better to have one large object that holds all the moves of all the games, or many smaller objects, each storing the moves of one game.
I realize that this is perhaps a small point in the whole optimization process, but it's the one I want to address now.
Donald Knuth: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil"
Start by designing a data model for your game that is right and natural from a domain modelling point of view.
Build the software.
Then, when you're at a stage in development that analysing performance makes sense, work out some low performance environments you'd like your game to work in, and test in them.
You'll probably find you have no issues, as others have stated in answers to this question.
If you find an issue, profile your code and discover the cause. Optimise the code you need to. Even if you find a performance issue, it's unlikely to be caused by your data model, as long as you've designed it well.
If the data model really is a performance issue, only then should you compromise your initial design, and only to the extent you need to in order to get the performance you require, documenting the compromises you've made and why you had to make them.
The best way to take advantage of objects is to use the prototypal pattern in JavaScript. This means you share data with objects, rather than being stingy and saying "That's mine!".
This pattern is all over the place in JavaScript libraries. It looks something like this:
var Template = {
  extend: function () {
    var F = function () {};
    // Chain off the current object itself, so properties are shared
    // through the prototype chain rather than copied into each instance.
    F.prototype = this;
    var instance = new F();
    instance.mixin.apply(instance, arguments);
    return instance;
  },
  mixin: function () {
    // Copy the own properties of every argument onto this object.
    for (var i = 0, len = arguments.length; i < len; i++) {
      for (var k in arguments[i]) if (arguments[i].hasOwnProperty(k)) {
        this[k] = arguments[i][k];
      }
    }
    return this;
  }
};
This magical object will share data across inheritance chains and make your object model simpler. Nothing's a class; everything's an object! It means understanding the minutiae of JavaScript, so as not to get stuck in sticky goo, but it's well worth it.
This means that when you have:
var Koopa = Template.extend({
  hasShell: true,
  fatalFlaw: 'shell'
});

var Bowser = Koopa.extend({
  fatalFlaw: 'tail'
});
The data stored looks as follows:
+---------------------+    +----------------------+    +----------+
| Bowser              | -> | Koopa                | -> | Template |
+---------------------+    +----------------------+    +----------+
| fatalFlaw => 'tail' |    | hasShell => true     |    | mixin    |
+---------------------+    | fatalFlaw => 'shell' |    | extend   |
                           +----------------------+    +----------+
This design stems from Father Crockford's prototypal inheritance design. And like Knuth says: "Premature optimization is the root of all evil!"
And... if this seems like an off-topic answer, that's intended. You should be asking how your design can serve your needs. If you do it well, everything should fall into place. If you don't think it through enough, you can end up with some nasty side effects and stinky code. Do yourself a favor and come up with a design that eliminates any bit of hesitation you have. Browsers do more complicated and CPU-intensive things these days than solving chess!
So, my answer is... don't listen to what people say about efficiency or what's best (even me!). Do what makes the most sense to you in the design of your library. Right now, you're trying to shove a square peg into a round hole. You need to decide what your needs are first, and everything will come naturally after. It might even surprise you!
Small objects are the preferable choice in my eyes.
From a software engineering perspective, it's much better to have a more modular design with smaller, composable objects rather than one monolithic object. Rather than having one object per game, I would even represent each move as an object.
Recent JS engines like V8 try to build struct-like objects from "class" definitions. This happens behind the scenes for fast property access. Google explains this in the design of V8. Essentially, if you have several small instances of the same class that is more or less static (that is, each instance of the class has the same properties), V8 can optimize for this case.
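A quick sketch of what "more or less static" means in practice:

// Shape-friendly: every Move gets the same properties in the same order,
// so V8 can give all instances one shared hidden class.
function Move(from, to, piece) {
  this.from = from;
  this.to = to;
  this.piece = piece;
}

var m1 = new Move('e2', 'e4', 'P');
var m2 = new Move('g8', 'f6', 'n');

// Adding ad-hoc properties later splits the hidden class and slows
// property access down:
// m2.annotation = '!?';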
Don't worry unnecessarily about performance. Design it in a way that seems natural. Obviously you don't want to put all the moves of all games into a single object, that defeats the purpose of object-oriented design. Go with the second option and have an object for each game. It's highly unlikely that you'll run into any problems unless you're loading several thousand games on a single page.
I think this question is much more about software design than performance. But I'm glad you are addressing it now, to avoid refactoring later.
I think you should treat each game (div) as an instance of a ChessGame object. This will facilitate much of the rest of development, such as building custom libs and functions to deal with a game; you will also have the possibility of running each game in its own thread (google "web workers"), improving overall performance.
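For example (a minimal sketch; the names are only illustrative):

// One small ChessGame instance per div, instead of one object
// holding the moves of every game on the page.
function ChessGame(container) {
  this.container = container; // the div hosting this game
  this.moves = [];            // only this game's moves
}

ChessGame.prototype.addMove = function (move) {
  this.moves.push(move);
};

var games = Array.prototype.map.call(
  document.querySelectorAll('.chess-game'),
  function (div) { return new ChessGame(div); }
);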

Ajax Architecture - MVC? Other?

Hey all, I'm looking at building an ajax-heavy site, and I'm trying to spend some time upfront thinking through the architecture.
I'm using CodeIgniter and jQuery. My initial thought process was to figure out how to replicate MVC on the JavaScript side, but it seems the M and the C don't really have much of a place.
A lot of the JS would be ajax calls, BUT I can see it growing beyond that, with plenty of DOM manipulation, as well as exploring the HTML5 client-side database. How should I think about architecting these files? Does it make sense to pursue MVC? Should I go the jQuery plugin route somehow? I'm lost as to how to proceed and I'd love some tips. Thanks all!
I've made an MVC-style JavaScript program, complete with M and C. Maybe I made a wrong move, but I ended up authoring my own event dispatcher library. I made sure that the different tiers only communicate using a message protocol that can be translated into pure JSON objects (even though I don't actually do that translation step).
So jQuery lives primarily in the V part of the MVC architecture. On the M and C side, I have primarily code that could run in the standalone CLI version of SpiderMonkey, or in the server-side Rhino implementation of JavaScript, if necessary. This way, if requirements change later, I can have my M and C layers run on the server side, communicating via those JSON messages with the V side in the browser. It would only require some modifications to my message dispatcher. In the future, if browsers gain some peer-to-peer-style technologies, I could even get the different tiers running in different browsers.
However, at the moment, all three tiers run in a single browser. The event dispatcher I authored allows multicast messages, so implementing an undo feature now will be as simple as creating a new object that simply listens to the messages that need to be undone. Autosaving state to the server is a similar maneuver. I'm able to do full detailed debugging and profiling inside the event dispatcher. I'm able to define exactly how the code runs, and how quickly, when, and where, all from that central bit of code.
Of course the main drawback I've encountered is that I haven't done a very good job of managing the complexity of the thing. For that, if I had it all to do over, I would study very carefully the "Functional Reactive" paradigm. There is one existing implementation of that paradigm in JavaScript called Flapjax. I would ensure that the view layer followed that model of execution, if not specifically use the Flapjax library. (I'm not sure Flapjax itself is such a great execution of the idea, but the idea itself is important.)
The other big implementation of functional reactive is Quartz Composer, which comes free with Apple's developer tools (which are free with the purchase of any Mac). If that is available to you, have a close look at it and how it works. (It even has a JavaScript patch, so you can prototype your application with a prebuilt view layer.)
The main takeaway from the functional reactive paradigm is to make sure that the view doesn't appear to maintain any kind of state except the one you've just given it to display. To put it in more concrete terms, I started out with "add an object to the screen" and "remove an object from the screen" type messages, and I'm now tending more towards "display this list of objects, and I'll let you figure out the most efficient way to get from the current display to what I now want you to display". This has eliminated a whole host of bugs having to do with sloppily managed state.
This also gets around another problem I've been having: bugs caused by messages arriving in the wrong order. That's a big one to solve, but you can sidestep it by sending the final desired state in one big package, rather than a sequence of steps to get there.
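In code terms, the difference looks something like this (the message shapes are invented for illustration):

// Sequence of steps: order matters, and state can drift if one is missed.
dispatcher.send({ type: 'add',    id: 'pawn-e4' });
dispatcher.send({ type: 'remove', id: 'pawn-e2' });

// One package with the final desired state: the view diffs it against
// whatever it is currently showing, so ordering bugs disappear.
dispatcher.send({ type: 'render', objects: currentBoardState() });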
Anyways, that's my little rant. Let me know if you have any additional questions about my wartime experience.
At the risk of being flamed, I would suggest another framework besides jQuery, or else you'll risk hitting its performance ceiling. Its à-la-mode plugins will also present a bit of a problem in trying to separate your M, V and C.
Dojo is well known for its Data Stores for binding to server-side data over different transport protocols, and for its object-oriented, lightning-fast widget system that can be easily extended and customized. It has a style that helps guide you into clean, well-divided code, though it's not strictly MVC; that would require a little extra planning.
Dojo has a steeper learning curve than JQuery though.
More to your question: the AJAX calls and the object (or Data Store) that holds and queries this data would be your Model. The widgets and CSS would be your View. And the Controller would basically be your application code that wires it all together.
In order to keep them separate, I'd recommend a loosely coupled, event-driven system. Try to access objects directly as little as possible, keeping them "black boxed", and get data via custom events or pub/sub topics.
JavaScriptMVC (javascriptmvc.com) is an excellent choice for organizing and developing a large scale JS application.
The architecture design is very practical. There are 4 things you will ever do with JavaScript:
Respond to an event
Request Data / Manipulate Services (Ajax)
Add domain specific information to the ajax response.
Update the DOM
JMVC splits these into the Model, View, Controller pattern.
First, and probably the most important advantage, is the Controller. Controllers use event delegation, so instead of attaching events, you simply create rules for your page. They also use the name of the Controller to limit the scope of what the controller works on. This makes your code deterministic, meaning if you see an event happen in a '#todos' element you know there has to be a todos controller.
$.Controller.extend('TodosController', {
  'click': function (el, ev) { ... },
  '.delete mouseover': function (el, ev) { ... },
  '.drag draginit': function (el, ev, drag) { ... }
});
Next comes the model. JMVC provides a powerful Class and basic model that lets you quickly organize Ajax functionality (#2) and wrap the data with domain specific functionality (#3). When complete, you can use models from your controller like:
Todo.findAll({after: new Date()}, myCallbackFunction);
Finally, once your todos come back, you have to display them (#4). This is where you use JMVC's view.
'.show click': function (el, ev) {
  Todo.findAll({ after: new Date() }, this.callback('list'));
},
list: function (todos) {
  $('#todos').html(this.view(todos));
}
In 'views/todos/list.ejs'
<% for (var i = 0; i < this.length; i++) { %>
  <label><%= this[i].description %></label>
<% } %>
JMVC provides a lot more than architecture. It helps you in every part of the development cycle with:
Code generators
Integrated Browser, Selenium, and Rhino Testing
Documentation
Script compression
Error reporting
I think there is definitely a place for "M" and "C" in JavaScript.
Check out AngularJS.
It helps you with your app structure and strict separation between "view" and "logic".
Designed to work well together with other libs, especially jQuery.
A full testing environment (unit, e2e) plus dependency injection are included, so testing is a piece of cake with AngularJS.
There are a few JavaScript MVC frameworks out there, this one has the obvious name:
http://javascriptmvc.com/
