I'm evaluating http://github.com/janl/mustache.js
and I'm thinking about how it will hold up in general over time. If I just build a giant object, is Mustache sufficient to transform it into any form of HTML?
So, my question is: is there anything that Mustache can't do?
(My thought is that it is just tree transformation from JSON to HTML, but I'm not sure how to validate that or gain enough confidence to bet against it)
further clarification
Suppose all I had was a giant object that I handed to a Mustache template in one pass; is there anything in HTML that can't be expressed via Mustache's language?
Since Mustache is just a template language in JavaScript, you can do anything you can already do in JavaScript, and JavaScript is Turing complete. So no, there's nothing that you can't do in Mustache; in fact, there's nothing you can do in Mustache that you can't do yourself in JavaScript. It just makes some things more convenient.
When evaluating something like this, instead of determining what it can and can't do, it's more useful to ask "does it make the things that I need to do easy" and "does it make the mistakes I want to avoid hard to make."
For instance, one way to evaluate it would be whether it makes it easy to avoid cross-site scripting (XSS) attacks. According to the documentation "mustache.js does escape all values when using the standard double mustache syntax", so it sounds like it does do a good job of helping prevent these sorts of attacks.
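To make that concrete, here is a rough sketch of the kind of escaping the double-mustache syntax performs. This is an illustration of the documented behaviour, not mustache.js's actual implementation:

```javascript
// Illustration of the HTML-escaping that double-mustache interpolation
// applies to values; not mustache.js source code.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// {{name}} would render the escaped form; {{{name}}} the raw form.
const userInput = '<script>alert("xss")</script>';
console.log(escapeHtml(userInput));
// → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Because the escaped form renders as inert text, injected markup never executes, which is exactly the XSS protection the documentation describes.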
To do a better job of evaluating it, you will need to provide more details on what your requirements are. What are you trying to do? What do you need to integrate with?
edit
Even after your clarification, it's still not exactly clear what you're looking for. Even restricting yourself to expanding a single Mustache template with a single view as input, you can produce any arbitrary string, and thus any arbitrary HTML, by just giving it that string as input.
If you're asking whether you can perform any arbitrary computation given a template and a view to render, then the answer to that is also yes, because Mustache allows you to call functions in your template, and those functions are written in JavaScript, which is Turing complete.
But both of these are trivial answers; you can produce any given output by providing that as input, or you could do any given computation by using a higher-order section. As I said previously, what is possible to do with it is less interesting than what is easy to do with it, and what mistakes are hard to make with it.
I suppose one weakness, which may be the type you're looking for, is that if you need more power than the Mustache system itself provides, you need to pass those functions in as part of the view. Thus, you need to conflate the object that is being displayed with the code that will be used to display it. And if you remove the ability to call JavaScript from the views that are passed into the templates, then you do severely limit what you can do. Given that these objects are known as "views", it seems it is by design that you mix the presentation logic into them; this is very different from template systems in which you allow the template to extract values directly from your model objects.
Yes, there are many things you cannot do in Mustache. Mustache is simpler than some other full-featured template systems (like the one in Django). It is a very minimal template system that encourages you (through its lack of features) to write "logic-less" templates. This means that some processing you could do in other template systems must instead be done in code that prepares the data sent to the template.
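In practice, "logic-less" means computing derived values in plain code before rendering. A minimal sketch, with all names made up for illustration:

```javascript
// Mustache has no expressions, so derived values are computed in code
// before rendering. The order/view-model names here are hypothetical.
function buildViewModel(order) {
  const total = order.items.reduce((sum, it) => sum + it.price * it.qty, 0);
  return {
    items: order.items,
    total: total.toFixed(2),
    hasItems: order.items.length > 0, // the template only tests this flag
  };
}

const view = buildViewModel({
  items: [{ name: 'pen', price: 1.5, qty: 2 }, { name: 'pad', price: 3, qty: 1 }],
});
// The template itself stays logic-less:
// {{#hasItems}}Total: {{total}}{{/hasItems}}
console.log(view.total); // → 6.00
```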
It's not a bad template system; it's just a minimal one that aims to be simple and fast.
So, I would say the answer to this question is: "Yes, there are things you cannot do in Mustache (compared to some other template systems)".
As I understand it, an Entity carries fundamental properties, methods, and validations. A User Entity would have name, DoB...email, verified, and whatever is deemed 'core' to the project. For instance, one project requires a phone number, while a completely different project would deem the phone number unnecessary but the mailing address required.
So then, moving outside of the Entity layer, we have the Use-Case layer, which depends on the Entity layer. I imagine a use-case as something slightly more flexible than an entity. This layer allows us to write additional logic and calculations while making use of an existing entity or multiple entities.
But my first uncertainty is whether a use-case can create derived values (DoB => Age) from entity properties and even persist them in storage, or whether they need to already exist in the User Entity class. Secondly, does the use-case layer even NEED to use entities? Could I have a use-case that literally just sums(a, b) and may or may not persist that in storage anyway? Thirdly, when persisting entity-related properties into the database, does retrieving them require validation once again? Is this redundant and does it hurt performance?
Finally, the bigger question is: what is a use-case? Should a use-case be adaptable by being agnostic of where the inputs come from and what it serves? Does this just mean dependency inversion removes the responsibility of which framework it ties to, i.e. using Express or Koa or plain http wouldn't force a rewrite of core logic? OR does it mean adaptable to something even greater, like whether it serves terminal-related applications directly or api request/response-like applications via a web server?
If it's the latter, then it's confusing to me, because it has to be agnostic about where its output goes, yet the outputs resemble the very medium they will be delivered to. For instance, designing a RESTful api, a use case may be...
getUserPosts(userId, limit, offset), which will output a format best for web api consumers (that, to me, is application logic for a specific application). And it's unlikely that I'll reuse getUserPosts for a different requestor (say, a terminal interface that runs locally and wants a more detailed response). So to me it shines when the time comes to switch between application-specific frameworks, like between Express/Koa/http/Connect for the same REST api, or between the node.js and Bun environments for the same terminal, rather than one almighty use-case that criss-crosses every kind of application (web service and terminal simultaneously, or anything else).
If it is almighty, should a use-case be designed with a more generalized purpose and made more configurable? Previously I could have added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json'). I suppose the forethought to anticipate different usage and scaling requires experience, unless this is where the open-closed principle shines, making it ready to be extended? Or is it just easier to make a separate use-case like getUserPostsWebServices and getUserPostsForLocal, getPremiumUsersPostsForWebServices? This makes sense to me, because now each use-case has its own constraints: it is not possible for WebServices to reach any more data fetching/manipulation than PostsForLocal or getPremiumUsersPostsForWebServices offers, and the reusability of WebServices is not tied to any web server framework. I suppose this is where I would draw the line for a use-case, but I'm inexperienced and don't know the answer.
I know this has been a regurgitation of my understanding rather than a concrete question, but it still points to the question of what the boundary and definition of a use-case is in clean architecture. Thanks for reading; would anyone chime in to clarify anything I said wrong?
But my first uncertainty is whether a use-case can create derived values (DoB => Age) from entity properties and even persist them in storage, or whether they need to already exist in the User Entity class.
The age of a user is directly derived from the date of birth, and I think the way it is calculated is the same across different applications. Thus it is application-agnostic logic and should be placed in the entity.
Defining a User.getAge method does not mean that it must be persisted. The entities in the clean architecture are business objects that encapsulate application-agnostic business rules.
Which properties are persisted is decided in the repository. Usually you only persist the basic properties, not derived ones. But if you need to query entities by derived properties, they can be persisted too.
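A minimal sketch of such an entity with a derived, non-persisted age. The class and its fields are hypothetical; passing the reference date in keeps the rule deterministic and easy to test:

```javascript
// Hypothetical User entity: age is derived from dateOfBirth, not stored.
class User {
  constructor(name, dateOfBirth) {
    this.name = name;
    this.dateOfBirth = dateOfBirth;
  }

  // Application-agnostic rule: age as of a given date (defaults to now).
  getAge(asOf = new Date()) {
    let age = asOf.getFullYear() - this.dateOfBirth.getFullYear();
    const hadBirthday =
      asOf.getMonth() > this.dateOfBirth.getMonth() ||
      (asOf.getMonth() === this.dateOfBirth.getMonth() &&
       asOf.getDate() >= this.dateOfBirth.getDate());
    if (!hadBirthday) age -= 1;
    return age;
  }
}

const user = new User('Ada', new Date(2000, 5, 15)); // born 15 June 2000
console.log(user.getAge(new Date(2018, 5, 14))); // → 17 (day before birthday)
console.log(user.getAge(new Date(2018, 5, 15))); // → 18
```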
Persisting time-dependent properties is a bit tricky, since they change as time goes by. E.g. if you persist a user's age and it is 17 at the time you persist it, it might be 18 a few days or even hours later. If you have a use case that searches for all users who are 18 to send them an email, you will not find them all. Time-dependent properties need a kind of heartbeat use case that is triggered by a scheduler and just loads (streams) all entities and persists them again. The repository will then persist the current value of age, and it can be found by other queries.
Secondly, does the use-case layer even NEED to use entities? Could I have a use-case that literally just sums(a, b) and may or may not persist that in storage anyway?
The use case layer usually uses entities. If your use case is as simple as summing two numbers, it need not use entities, but I guess this is a rare case.
Even very small use cases like sums(a, b) can require the use of entities, if there are rules on a and b. These can be very simple rules, like a and b must be positive integer values. But even if there are no rules, it can make sense to create entities, because if a and b are custom entities you can give them a name to emphasize that they belong to a critical business concept.
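A small sketch of that idea: a value object that enforces the "positive integer" rule so the trivial use case can trust its inputs. All names here are invented for illustration:

```javascript
// Hypothetical value object: the constructor enforces the business rule,
// so any PositiveAmount instance is valid by construction.
class PositiveAmount {
  constructor(value) {
    if (!Number.isInteger(value) || value <= 0) {
      throw new Error('PositiveAmount must be a positive integer');
    }
    this.value = value;
  }
}

// The trivial use case: the inputs carry their own validity guarantee.
function sumAmounts(a, b) {
  return new PositiveAmount(a.value + b.value);
}

const result = sumAmounts(new PositiveAmount(2), new PositiveAmount(3));
console.log(result.value); // → 5
```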
Thirdly, when persisting entity-related properties into the database, does retrieving them require validation once again? Is this redundant and does it hurt performance?
Usually your application is the only client of the database. If so, then your application ensures that only valid entities are stored in the database, and thus it is usually not necessary to validate them again.
Valid can be context-dependent, e.g. if you have an entity named PostDraft, it should be clear that a draft doesn't have the same validation rules as a PublishedPost.
Finally, a note on the performance concerns. The first rule is: measure, don't guess. Write a simple test that creates, e.g., 1,000,000 entities and validates them. Usually a database query and/or the network traffic, or I/O in general, is the performance issue, not in-memory computation. Of course you can write code with weird loops that mess up performance, but often this is not the case.
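The suggested measurement might look like the sketch below. The entity shape and validation rule are made up; the point is simply that validating a million objects in memory is cheap next to I/O:

```javascript
// "Measure, don't guess": validate a large batch of entities and time it.
// Hypothetical entity shape and rule, for illustration only.
function validateUser(user) {
  return typeof user.name === 'string' && user.name.length > 0 && user.age >= 0;
}

const users = [];
for (let i = 0; i < 1000000; i++) {
  users.push({ name: 'user' + i, age: i % 100 });
}

const start = Date.now();
const allValid = users.every(validateUser);
console.log('valid:', allValid, 'elapsed:', Date.now() - start, 'ms');
```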
Finally, the bigger question is: what is a use-case? Should a use-case be adaptable by being agnostic of where the inputs come from and what it serves? Does this just mean dependency inversion removes the responsibility of which framework it ties to, i.e. using Express or Koa or plain http wouldn't force a rewrite of core logic? OR does it mean adaptable to something even greater, like whether it serves terminal-related applications directly or api request/response-like applications via a web server?
A use case is application-dependent business logic. There are different reasons why the clean architecture (and also others, like the hexagonal architecture) makes use cases independent of the I/O mechanism. One is that it is independent of frameworks, which makes use cases easy to test. If a use case depended on an http controller, or rather if you implemented the use case in an http controller, e.g. a REST controller, you would need to start up an http server, open a socket, write the http request, read the http response, and extract the data just to test it. Even though there are frameworks and tools that make such a test easy, these tools must still start a server, and this takes time. Tests that are slow are not executed often, are they? And tests are the basis for refactoring. If you don't have tests, or the tests run slowly, you don't execute them. If you don't execute them, you don't refactor. So the code must rot.
In my opinion testability is most important, and decoupling use cases from any details, as Uncle Bob calls the outer layers, increases their testability. Use cases are the heart of an application. That's why they should be easily testable and protected from any dependency on details, so that they do not need to be touched when a detail changes.
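A small sketch of what that decoupling buys you: the use case below depends only on an injected repository, so a test can hand it an in-memory fake instead of starting a server. All names are made up for illustration:

```javascript
// Hypothetical use case: depends only on a repository interface,
// never on Express/Koa/http, so no server is needed to test it.
function makeGetUserPosts(postRepository) {
  return function getUserPosts(userId, limit, offset) {
    const posts = postRepository.findByUser(userId);
    return posts.slice(offset, offset + limit);
  };
}

// In a test, the repository is a plain in-memory fake:
const fakeRepo = {
  findByUser: (userId) => [
    { id: 1, userId },
    { id: 2, userId },
    { id: 3, userId },
  ],
};

const page = makeGetUserPosts(fakeRepo)('u1', 2, 1);
console.log(page.map((p) => p.id)); // → [ 2, 3 ]
```

A real controller would wire in a database-backed repository, but the use case itself never knows the difference.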
If it is almighty, should a use-case be designed with a more generalized purpose and made more configurable? Like previously, I could've added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json')
I don't think so. The sideloadingConfig and format parameters like json or csv, especially, are not parameters for a use case. These parameters belong to a specific kind of frontend, or rather to a specific kind of controller. A use-case provides the raw business data. It is the responsibility of a controller, or rather a presenter, to format it.
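The split can be sketched like this: one use case returning raw data, and separate presenters owning the output format. All function names and data are invented for illustration:

```javascript
// Hypothetical use case: returns raw business data only.
function getUserPosts() {
  return [{ title: 'First', views: 10 }, { title: 'Second', views: 3 }];
}

// Presenters own the format; adding a format never touches the use case.
function presentAsJson(posts) {
  return JSON.stringify(posts);
}

function presentAsCsv(posts) {
  const header = 'title,views';
  const rows = posts.map((p) => `${p.title},${p.views}`);
  return [header, ...rows].join('\n');
}

const raw = getUserPosts();
console.log(presentAsCsv(raw));
// → title,views
//   First,10
//   Second,3
```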
** Revised **
I made this basic example, which I believe is evidence that JavaScript may be useful as its own template engine:
http://jsfiddle.net/julienetienne/u6akrx7j/
<script>talk[0].text('Hello World!');</script>
It's just a simple example, but as you can see there are many possibilities, e.g.:
It doesn't necessarily have to detect the tag nodes in that manner; it could do it by class or id. It is also possible to obtain the script node of the function.
You could simply print variables like p('Page Title');
A self-closing list of elements could be written as, e.g., li('', menu);
And you could clearly build up other complex data sets as you can with any other common template engine.
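The ideas above can be sketched as plain functions that return markup strings, so there is no template syntax at all. These helper names are made up for illustration:

```javascript
// Sketch of "JavaScript as its own template engine": plain functions
// stand in for tags. All helpers here are hypothetical.
const tag = (name) => (content) => `<${name}>${content}</${name}>`;
const p = tag('p');
const li = tag('li');
const ul = (items) => `<ul>${items.map(li).join('')}</ul>`;

console.log(p('Page Title'));       // → <p>Page Title</p>
console.log(ul(['Home', 'About'])); // → <ul><li>Home</li><li>About</li></ul>
```

Note this builds strings rather than mutating live DOM nodes as the fiddle does, but it shows the same "no syntax to learn" appeal.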
Before this edit I made the mistake of comparing it to PHP. I'm actually considering it more as an alternative to, e.g., Handlebars, Underscore, Mustache, Dust and similar.
But before I get too excited I would really like to know if there are any issues with using it this way. (I'm not concerned with novice best practices.)
The benefits of an organic template system seem quite apparent. The biggest advantage is that there is no syntax to learn, and it's cleaner than seeing %{{foobar}}%-like markings.
Considering my example is just a tiny minimalistic concept, please tell me the drawbacks of a system like this compared to common template engines.
Thanks
It looks like it's a PHP-style templating system, but it is apparently not:
In the fiddle the script parts could be written anywhere in the page.
The part that defines the position of the texts to be inserted into the DOM is defined by the line var talk = document.getElementsByTagName('div'); and the structure of the HTML.
If you used more divs than the ones already there, the texts would appear at a totally different (and presumably wrong) position.
The disadvantage is thus that you can't use the system independently of the underlying markup. That makes it different from the PHP use case, where you can echo the variables by defining their position in the markup.
Have a look at Angular or similar front end frameworks, if you are looking for a way how to implement a templating system.
Is it a bad practice to have backbone views which do not depend on any kind of templating system?
Of course, this does not mean that any kind of DOM code will be generated by hand, using hardcoded strings within the views. No no no. I can of course cache the basic layout for each view in DOM elements set to display:none. However, I want to reduce any kind of value setting within the templates themselves. I'd rather do that using jQuery, or any other kind of DOM modifier, from within the view itself. This way I save myself the constant discrepancies and the countless hours of effort that I've exposed my apps and myself to using Mustache, Handlebars, the Underscore templating system, etc.

Having all the view logic in one place makes everything much cleaner, at least in my view. It gives a lot of benefits, such as proper partial rendering, value binding, etc., that I'd need tons of days to implement myself with Mustache or something similar.
The only problem I can see is whether the constant checks I'd do with jQuery will perform fast enough, but I guess that shouldn't be such a problem in the end.
What do you think? Good? Bad? Practical?
IMHO not using a template engine is not, per-se, a bad design decision.
Template engines are meant to produce cleaner and more maintainable code; if you think not using them produces cleaner and more maintainable code, then you can run without them.
In my opinion it is better to use template engines, but this is just a matter of taste. I also combine templates with manual DOM modifications via this.$el.find(".my-element").html( "new value" ); for partial updates.
Templates are good. They save you a lot of time coding and allow you to loop over elements and create multiple elements with very little code.
That said, if render is called too often, or you destroy and recreate DOM elements/templates, it costs a lot in performance.
I have a PerfView for Backbone that can display 1,000,000 models in a scrollview at 120FPS in Chrome. The trick is that I only render the template once with all the possible nodes I need, then change the content in the DOM when the model changes. Also, using an object pool for the DOM elements lets you reuse them. Check out the modelMap property on line 237 (https://github.com/puppybits/BackboneJS-PerfView/blob/master/index.html) for how to semi-automate updating an element so you don't overuse a template.
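A framework-free sketch of that "render once, then patch" idea: each model attribute maps to a small updater, so a change touches one node instead of re-rendering the whole template. The "element" below is a plain object standing in for a cached DOM node, and all names are invented (this is not the PerfView code itself):

```javascript
// Sketch of a modelMap-style binding: attribute name → updater function.
function bind(element, modelMap) {
  return function onChange(attr, value) {
    if (modelMap[attr]) modelMap[attr](element, value);
  };
}

const el = { name: '', title: '' }; // stand-in for cached DOM nodes
const update = bind(el, {
  name: (node, v) => { node.name = v; },   // would be node.textContent = v
  title: (node, v) => { node.title = v; }, // in a real DOM
});

update('name', 'Ada');
update('title', 'Engineer');
console.log(el); // → { name: 'Ada', title: 'Engineer' }
```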
After reading this Micro templates are dead article. I've become curious:
Whether using the DOM on the server results in cleaner, more maintainable code than templating.
Whether it's more efficient to use jsdom instead of a templating engine.
How to factor jsdom into the View of a standard MVC setup.
And generally, in what situations would it be better to use a server-side DOM abstraction like jsdom rather than a templating engine like EJS or Jade.
The question is specific to node.js and other SSJS
Well, I actually needed jsdom for a small project I built over the weekend in node.js. On my server, I had to accept a URL to fetch, grab all of the HTML from the given URL, parse it, and display images to the user so that the user could select a thumbnail from that URL (kind of like when you drop a link into the Facebook input box). I used a module called Request which allows me to fetch HTML on the server side. However, when that HTML reached my program, I had no way to traverse it like you do with client-side JavaScript. Because there was no actual DOM, I couldn't say document.getElementById('someId').

That's where jsdom came in handy: it gave me a "makeshift" DOM that allowed me to traverse the returned HTML. Even though I was still on the server side, jsdom created a window object very similar to the window object in the browser, and created a DOM out of the returned HTML. Even on the server, I was able to get all images by calling window.$('img'), and I could target and parse the elements as normal. So, this is just one problem where jsdom turned out to be the solution, but it worked amazingly well. Hope this helps some!
It's a nice abstraction that matches a client-side engineer's mental model of how the DOM is built and modified. In that respect it is 'cleaner', because there is one mental model. It's also nice because we don't have to mix a kluge of disparate syntaxes from a templating language on top of otherwise clean declarative markup, as is the case with even the 'stupidest' templating system, such as Mustache.
I would NOT say it's more efficient to use jsdom for templating. Go take a gander at Google for 'memory leaks with jsdom', for instance. jsdom is rad, and is super useful for tasks like weekend projects crawling sites and other non-server-related tasks, but I think it's slow as shit from a high-performance web server perspective.
There are a billion ways to factor this. No method has emerged as a 'standard' way. One way that I've seen is to send down an empty 'template', i.e. a block of html that represents a model in some way, and then use that to bootstrap building your end view from a model. From that article, for example:
<li class="contact" id="contact-template">
<span class="name"></span>
<p class="title"></p>
</li>
This is the 'view' in the classic respect. In the typical web application, it might look something more like:
<li class="contact">
<span class="name"><?= $name ?></span>
<p class="title"><?= $title ?></p>
</li>
To use mvc, one sets up a controller that is vaguely aware of the semantics of the above view and the model it represents. This view is parsed into the/a DOM and accessed via your favorite selector engine. Each time the model this represents changes, you might use change events or callbacks to update the view. For instance:
Lets imagine that 'model' fires a 'change' event every time a property changes.
controller = new Controller({
  view: $('#contact-template').clone(), // Assume jQuery or whatever
  model: aContact
});

// Assume some initialization that sets the view up for the first time
// and appends it to the appropriate place. A la:
//   this.view.find('.name').text(model.name);
//   this.view.find('.title').text(model.title);
//   this.view.appendTo('#contacts')

controller.on('model.name.change', function(name){
  this.view.find('.name').text(name);
});
These are what systems like Weld and Backbone.js do for you. They all have varying degrees of assumptions about where this work is taking place (server-side, client-side), what framework you're using (jquery, mootools, etc), and how your changes are being distributed (REST, socket.io, etc).
Edit
Some really useful things you can do with jsdom revolve around integration testing and spidering:
https://github.com/mikeal/spider - general-purpose web spider that makes use of node's event-based processing and gives you jsdom/jquery to help you easily access the DOM in a programmatic way
https://github.com/assaf/zombie - headless browser testing using jsdom/jquery for integration tests
https://github.com/LearnBoost/tobi - similar headless browser testing
Personally, I'd like to see a project that took tobi's approach, but mapped it on top of something like https://github.com/LearnBoost/soda such that we can do cloud based selenium testing without selenese (since imo, it sucks).
A few come to mind:
Sharing views/controllers between server and browser
Data mining / crawling / processing
Transformation for fragments of HTML used in AJAX/realtime stuff
Absolute separation of logic and content by avoiding template tags
And to answer your questions:
Maybe. A lot of things affect code quality, but it's a step in the right direction.
Nope; templating engines will always be faster, since they can pre-compile templates.
This probably warrants a new question?
Point 2 of your question can be answered by this templating test case:
Go to http://jsperf.com/dom-vs-innerhtml-based-templating/300
Click the "Run tests" button.
Be patient; it compares weld against a lot of other template engines and gives you the current benchmarks...
I recently ran into a situation where it made sense (at first at least) to have my server-side code not only "print" html, but also some of the javascript on the page (basically making the dynamic browser code dynamic itself). I'm just wondering if there is sometimes a valid need for this or if this can usually be avoided...basically from a theoretical perspective.
Update:
Just to give an idea of what I'm doing: I have a js function which needs to contain several statements (the number is determined by a server-side variable) that generate DOM elements using jQuery. These elements are then added to the page. So I am using a server-side loop to loop over the number of elements in my object, and inside of this loop (which also happens to be inside of a js function) I am aggregating all of these statements.
Oh, and these DOM elements are being retrieved from an xhr (so the number of xhr requests is itself server-side dependent) and then processed using jQuery, which helps explain why I'm not just printing the static html to begin with. Sounds kind of ridiculous, I'm sure, but it's a complicated UI. Still, I'm open to criticism.
I can smell some code smell here... if you're having to generate code, it probably means one of:
you are generating code that does different things in different situations
you are generating the same kind of functionality, only on different things.
In case 1, your code is trying to do too much. Break your responsibilities into smaller pieces.
In case 2, your code is not generic enough.
Yes, there is sometimes a need, and it is done often.
Here is a simple usage in an ASP.NET-like syntax:
function SayHi() {
  alert("Hello <%= UserName %>");
}
Just a suggestion. If your server-side is generating HTML/Javascript, then you're letting view-side logic creep into your server-side. This violates separation of concern if you're following an MVC-style architecture. Why not use a taglib (or something of that nature) or send JSON from the server-side?
One useful feature is obfuscating your javascript on the fly. When combined with a caching mechanism it might actually be useful. But in my opinion javascript generation should be avoided, and all server-side variables should be handed to the script from the templates (on class or function init) or retrieved using XMLHttpRequest.
If your application generates javascript code only for data (e.g. you want to use data from SQL in js), the better practice is to use JSON.
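The JSON approach can be sketched as follows: the server emits only data, and the script stays static. The payload shape and names are hypothetical:

```javascript
// Instead of printing JavaScript from the server, the server emits only
// data as JSON; the client-side script below never changes.
const payload = '{"userName": "Alice", "itemCount": 3}'; // as served, hypothetically

const data = JSON.parse(payload);
console.log(`Hello ${data.userName}, you have ${data.itemCount} items`);
// → Hello Alice, you have 3 items
```

This keeps view logic out of the server-generated output entirely: changing the data never requires regenerating code.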
Yes, sometimes for some specific task it may be easier to generate javascript on the fly in a somewhat complex way, like in Rails RJS templates or JSONP responses.
However, with javascript libraries and approaches getting better and better, the need for this is shrinking. As a simple example, you may once have needed to decide on the page whether to loop over some elements and apply something, or hide another single element; with jQuery and other libraries, or even your own function that handles some complex situation, you can achieve this with a simple condition and a call.