WebApp that communicates using only json objects? - javascript

Hey everyone, I've been thinking about how the majority of web apps work at the moment. If, for example, the backend is written in Java/PHP/Python, what you usually see is that the backend "echoes/prints" ready-made HTML to the browser, right?
For web apps that work almost exclusively through AJAX, is there a reason not to skip the HTML entirely and just pass JSON objects back and forth between the server and the client? Instead of "printing or echoing" HTML in our backend script/app, we simply echo the JSON string; AJAX fetches it and converts the JSON string to an object with all of our attributes/arrays and so on.
Surely this way we have fewer characters to send (no HTML tags and so on), and on the client side we simply use frameworks such as jQuery to create/format our HTML there, instead of printing and echoing the HTML in the server scripts?
Perhaps people already do this, but I have not really seen a lot of apps that work this way.
The reason I want to do this is that I would like to separate the presentation and logic layers more than they currently are: instead of "echoing" HTML in my Java/PHP I just "echo" JSON objects, and JavaScript takes care of the whole presentation layer. Is there something fundamentally wrong with this? What are your opinions?
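To make it a bit more concrete, the kind of thing I have in mind on the client side would be something like this (the endpoint and field names are just made-up examples):
// Hypothetical endpoint that echoes a JSON string instead of ready-made HTML,
// e.g. {"users": [{"name": "Ada", "email": "ada@example.com"}, ...]}
$.getJSON('/api/users', function (data) {
    var $list = $('<ul>');
    $.each(data.users, function (i, user) {
        // Build the markup on the client instead of echoing it from the backend
        $('<li>').text(user.name + ' (' + user.email + ')').appendTo($list);
    });
    $('#content').empty().append($list);
});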
Thanks again Stackoverflow.

There are quite a few apps that work this way (simply communicating via AJAX using JSON objects rather than sending markup).
I've worked on a few and it has its advantages.
In some cases though (like when working with large result sets) it makes more sense to render the markup on the server side and send it to the browser. That way, you're not relying on JavaScript/DOM Manipulation to create a large document (which, depending on the browser, would perform poorly).

This is a very sensible approach, and is actually used in some of our applications in production.
The main weakness of the approach is that it increases the resource load on the browser, and therefore - given browsers' often already-sluggish JS performance - might lead to a worse user experience unless the presentation-layer mechanics are very well tuned.

Nowadays many web apps use this approach, like Gmail and other big apps, even Facebook.
The main advantage of this approach is that the user does not need to refresh whole pages; he gets exactly what we want to show him, or what he asked for.
But we may have to build both versions, AJAX and normal page refresh, for the case where the user refreshes the page.
For the client-side HTML we can use jQuery templates, which generate the HTML, or Google's Closure Templates, which are used by Gmail and other Google products.
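For example, with the jQuery Templates plugin (jquery.tmpl) the idea looks roughly like this; the template id, fields and container are invented for illustration:
<!-- Template lives in the page as a non-executing script block -->
<script id="rowTmpl" type="text/x-jquery-tmpl">
    <li>${name} (${email})</li>
</script>

// Fill the template with JSON fetched from the server and append the result
$.getJSON('/api/users', function (users) {
    $('#userList').empty();
    $('#rowTmpl').tmpl(users).appendTo('#userList');
});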

Related

Engineering a better solution than getting a whole HTML document with AJAX

I have a webpage that I am working on for a personal project to learn about javascript and web development. The page consists of a menu bar and some content. My idea is, when a link is clicked, to change the content of the page via AJAX, such that I:
fetch some new content from a different page,
swap out the old content with the new,
animate the content change with some pretty javascript visual effects.
I figure that this is a little more efficient than getting a whole document with a standard HTTP GET request, since the browser won't have to fetch the style sheets and scripts sourced in the <head> tag of the document. I should also add that I am fetching content solely from documents served by my own web app, whose content I am fully aware of.
Anyway, I came across this answer on an SO question recently, and it got me wondering about the ideal way to engineer a solution that fits the requirements I have given for the web page. The way I see it, there are two solutions, neither of which seem ideal:
Configure the back-end (server) from which I am fetching such that it will return content and not the entire page if asked for only content, and then load that content in with AJAX (my current solution), or
Get the entire document with AJAX and then use a script to extract the content and load it into the page.
It seems to me that neither solution is quite right. For 1, it seems that I am splitting logic across two different places: the server has to be configured to serve content when asked for only content, and the JavaScript has to know how to ask for it. For 2, it seems that this is an inappropriate use of AJAX (according to the previously mentioned SO answer), given that I am asking for a whole page and then parsing it, when AJAX is meant to fetch small bits and pieces of information rather than whole documents.
So I am asking: which of these two solutions is better from an engineering perspective? Is there another solution which would be better than either of these two options?
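For reference, my current solution 1 boils down to something like this; the ?fragment=1 flag is just a made-up convention that my server checks to decide whether to return only the content:
$('.menu a').click(function (e) {
    e.preventDefault();
    // Ask the server for just the content fragment, not the whole document
    $.get($(this).attr('href') + '?fragment=1', function (html) {
        $('#content').fadeOut(200, function () {
            // Swap the old content for the new and animate it back in
            $(this).html(html).fadeIn(200);
        });
    });
});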
animate the content change with some pretty javascript visual effects.
Please don't. Anyway, you seem to be looking for a JS MVC framework like Knockout.
Using such a framework, you can let the server return models, represented in JSON or XML, which a little piece of JS transforms into HTML, using various ways of templating and annotations.
So instead of doing the model-to-HTML translation server-side and sending a chunk of HTML to the browser, you just return a list of business objects (say, Addresses) in a way the browser (or rather JS) understands and let Knockout bind that to a grid view, input elements and so on.
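A rough sketch of what that can look like with Knockout; the markup, endpoint and field names here are only illustrative:
<!-- The view: Knockout fills this list from the bound model -->
<ul data-bind="foreach: addresses">
    <li data-bind="text: street"></li>
</ul>

// The server just returns JSON, e.g. [{"street": "1 Main St"}, ...]
var viewModel = { addresses: ko.observableArray([]) };
ko.applyBindings(viewModel);

$.getJSON('/addresses', function (data) {
    viewModel.addresses(data);   // Knockout re-renders the list automatically
});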

Load HTML with Ajax vs. load only data with Ajax and build the html with Javascript

I'm working on a project which needs to load some HTML information into the page via AJAX, but I'm not quite sure how to load that information.
I don't know which method is better for me:
1) Load the entire HTML with AJAX and then just append it to the page
or
2) Load only the data with AJAX and then build the HTML using Javascript(+JQuery)
The one I tend to use is the first method, because it's the easiest one, but it is also more expensive memory-wise (although the biggest file I have to load is about 7 KB, which is not too much).
The second one, which is the hardest, involves a lot of JavaScript (jQuery) code to build the HTML (I also have to set the attributes for each element). And because I have lots of different HTML to load, I have to write lots of conditions (e.g. one for a button, one for a title, one for a textarea, etc.) and also create variables that contain that HTML.
My question is what method is the best to use in my case?
I would always stick to the solution where I cleanly divide data/logic and view. This would probably be the case in the first alternative. Making changes to HTML generated with JavaScript is quite hard.
I'd suggest another option: use client-side templates. Load the template and the data with an AJAX call and then fill out the template using JavaScript. There are some libraries out there for this scenario.
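As one possible sketch, using Mustache; the template URL, data URL and field names are placeholders:
// Fetch the template and the data separately, then combine them client-side
$.get('/templates/user-list.mustache', function (template) {
    $.getJSON('/api/users', function (users) {
        // The template might contain {{#users}}<li>{{name}}</li>{{/users}}
        var html = Mustache.render(template, { users: users });
        $('#user-list').html(html);
    });
});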
I can imagine a couple of developers working: A full-time back-end developer and a full-time front-end developer.
The BE developer has to send some data for the FE developer to display correctly. Wanting to keep the programming as easy as possible, he chooses the first method described by the OP. Everyone is happy.
A few months later, the presentation of this very data needs to be updated. The manager quickly calls the Front-End developer, which says:
"Uhhh... No can do. The entire data already comes formatted directly from the server."
Oh noes! Would this have happened if the data came only as RAW DATA from the server? I wonder =)
The second method:
Load only the data with AJAX and then build the HTML using Javascript(+JQuery)
is more appropriate. One of the main advantages of this method is that your AJAX responses become smaller and faster.
Also, it is more logical to separate the logic from the design.
Note: the best solution always depends on your specific case.

UI Development - JavaScript vs Ajax

This is a design question. I find myself going back and forth between two UI design styles.
The UI on a prototype I developed recently relied heavily on loading the elements of the UI as partial views via AJAX. I did not like that approach because I had to make a lot of requests to the server while my page was loading. However, the plus of this design was that I could easily edit the partial view templates, so the code was easier to manage.
In the next iteration I decided to package all the information at once and then use JavaScript to generate partial views and plug this information into it (and only use Ajax when on-demand up-to-date information was actually needed). My page loads faster, but I find myself generating a lot of HTML snippets in JavaScript, which is getting harder to manage.
As I see it, using Ajax you get:
Easier to maintain (+)
Longer UI response times (-)
And with JavaScript only, you get
Faster UI response (+)
Easier to handle server-side errors (+)
Harder to maintain (-)
So, there are a few things on which I would like to hear your comments:
I do not like using Ajax unless there is an actual need for on-demand data. Am I wrong?
Are there frameworks/libraries that would make managing HTML-generating JavaScript code easier?
Are there any other pros/cons of the two approaches that I have missed?
Thanks,
There are templating libraries for JavaScript, if you want to get that involved. In general I would avoid manually sticking together HTML strings from JS unless there's a particular need to (eg in some cases for performance when dealing with very large tables).
HTML-hacking is difficult to read and prone to HTML-injection security holes when you don't get escaping right.
Start instead with the DOM methods, and use DOM-like content manipulation libraries to make it easier. For example, if using jQuery, do this:
$('<a>', {href: somelink, text: sometext})
and not this:
$('<a href="'+somelink+'">'+sometext+'</a>') // insecure mess
There doesn't need to be a big difference between fetching data through XMLHttpRequest vs including it in the HTML document itself. You can direct a bunch of JSON data to the same function that will build page parts whether it was just fetched by XMLHttpRequest in an update operation, or included in function calls at document load time.
When including data in the page you will generally need to include a timestamp saying when it was generated in any case, so that if the browser goes back to the page a while later without reloading it, the browser can detect that the information is now out-of-date and cue up an XMLHttpRequest to update it.
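A minimal sketch of that idea; the function, markup and URL are invented for illustration:
// One rendering function, no matter where the JSON came from
function renderStats(stats) {
    var $box = $('#stats').empty();
    $.each(stats, function (i, s) {
        $('<p>', { text: s.label + ': ' + s.value }).appendTo($box);
    });
}

// At document load time the server can inline the data in a function call...
renderStats([{ label: 'Visitors', value: 1024 }]);

// ...and the same function handles a later XMLHttpRequest refresh
setInterval(function () {
    $.getJSON('/stats.json', renderStats);
}, 60000);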
The usual question when you're creating your page content from data with client-side JavaScript is, are you going to fill in the initial static HTML version from the server too? If so, you're going to be duplicating a lot of your content generation work in both client-side JS and the server-side language(*). If not, then you're making the content invisible to non-JS user agents which includes search engines. Whether or not that matters to you typically depends on what the app is doing (ie does it need to be searchable and accessible?)
(*: unless you can use server-side JavaScript from something like node.js and re-use your content generation code. This is still a somewhat rare approach.)
Why not look at integrating require.js into your workflow? I'm not entirely sure how it would work with templates, but included in their pipeline is the ability to package all required script files into a single .js to be served by a single server request/response.
I have no personal experience about it, but Closure looks promising. The thing about its templates being usable on both server and client side might be of interest to you. Here is what was said about using it in Google+:
The cool thing about Closure templates is they can be compiled into both Java and JavaScript. So we use Java server-side to turn the templates into HTML, but we can also do the same in JavaScript client-side for dynamic rendering. For instance, if you type in a profile page URL directly, we'll render it server-side, but if you go to the stream say and navigate to someone's profile page, we do it with AJAX and render it client-side using the same exact template.
When working with remote data after a page is loaded, for small datasets I prefer returning data only and adding to the UI with templates.
For large datasets, I recommend using your partial views to render the html on the server to reduce overhead in the client as #bobince mentioned.
For client-side state tracking, check out Knockout at http://www.knockoutjs.com. It uses an MVVM approach with data models bound to UI elements and makes it very simple to send the data back to the server via AJAX. It works with the jquery.tmpl template library out of the box or you can integrate another library of preference with a little more effort.
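A small sketch of sending the bound data back with Knockout; the endpoint and fields are made up:
// View model bound to input elements via data-bind="value: name" etc.
var contact = {
    name: ko.observable(''),
    email: ko.observable('')
};
ko.applyBindings(contact);

$('#save').click(function () {
    $.ajax({
        url: '/api/contacts',            // placeholder endpoint
        type: 'POST',
        contentType: 'application/json',
        data: ko.toJSON(contact)         // serializes the current model state
    });
});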
As far as managing templates, it's easy enough to store common templates in an object, either on the server to be retrieved with your remote data, or in a javascript object on the client.

Opinion Regarding Filtering of Content using JS

I'm working on a project and there is some debate over how some JS filtering should be implemented, and I would like to ask you guys for input on this.
Today we have a site that displays a long list of repeated entries of data, and some JS filtering would be nice for the users to navigate through it. The usual stuff: keyword, order, date, price, etc. The question is not the use of JS, which is obvious, but the origin of the data. One person argues that the HTML itself should be used as the source, with the JS parsing through it to apply the user's desired filtering. Another person argues that we should use JSON generated on the server, and that the JSON should be the data's origin.
What you guys think on this? What are the pros and cons?
As a final request, I would like you to be as informative as possible, since your answers will be used and referenced by all of us in the company. (Yes, that is how much we trust you! :)
The right choice is a matter of taste and system architecture as well as utility.
I would go with dynamically generated pages using JS and JSON -- these days I think you can safely assume that most browsers have JavaScript enabled -- however you may need to make provisions for crawlers (GoogleBot, Bing, Ask etc.), as they may not fully execute all JS and hence may not index the page, unless you figure out some kind of exception for supporting them.
Using JS+JSON also means that support for mobile devices is handled client-side, without the web server having to create anything special.
Doing DOM manipulation on the delivered HTML as the alternative would not be my best friend, as the logic of the page control and layout is split up in two places -- partly in the view controller on the web server, and partly in the JavaScript. In my opinion it is better to have it in one place and have the view controller only generate JSON and serve the root pages, etc.
However, this is a matter of taste, and I'm not sure I would be able to say that there is one correct and best solution.
I think it's a lot cleaner if the data is delivered in JSON and then the presentation HTML or view of that data is generated from that JSON with javascript.
This fits the more classic style of keeping core data structures separate from views. In this manner you can generate all types of views without having to constantly munge/revise the way you store, access and manipulate the data. You can even build classes and methods to develop a clean interface on your data that is entirely independent of how that data is displayed.
The only issue I see with that is if the browser doesn't support javascript and that browser is a desired viewer. In that case, you have to include a default HTML version from the server that will obviously not be manipulated and the JSON will be ignored.
The middle ground is that you include both JSON and the "default", initial HTML view of that data in rendered HTML. The view comes up quickly and non-JS browsers can see something useful. But, then any future manipulation of the view (sorting, for example) uses the JSON data and generates a new clean view from the JSON data. No data is then ever "parsed" from the HTML view.
In larger projects, this also can facilitate the separation of presentation from data manipulation so different people may work on creating HTML views vs. manipulate the data (like sorting).
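A sketch of that middle ground; the variable name, markup and fields are illustrative:
// In practice the server renders the initial <table> as plain HTML and also
// embeds the same data as JSON; here it is declared inline for illustration.
var productData = [
    { name: 'B', price: 2 },
    { name: 'A', price: 1 }
];

function renderRows(rows) {
    var $tbody = $('#products tbody').empty();
    $.each(rows, function (i, row) {
        $('<tr>')
            .append($('<td>', { text: row.name }))
            .append($('<td>', { text: row.price }))
            .appendTo($tbody);
    });
}

// Sorting works from the JSON, never by parsing the rendered HTML
$('#sort-by-price').click(function () {
    productData.sort(function (a, b) { return a.price - b.price; });
    renderRows(productData);
});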
I would make multiple AJAX calls to the server and have it return the sorted/filtered data. If your server backend is fast then it won't be very taxing, and you could even cache the data between requests.
If you only have 50-100 items then it would be reasonable to send them all to the client and have JavaScript sort and filter them.
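For that small-data case the filtering itself is simple; the names here are invented:
var allItems = [];   // filled once from the server, e.g. via $.getJSON

function filterByKeyword(keyword) {
    var matches = $.grep(allItems, function (item) {
        return item.title.toLowerCase().indexOf(keyword.toLowerCase()) !== -1;
    });
    renderList(matches);   // hypothetical rendering function, as in the sketches above
}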
Some considerations to help make the decision
Is the information sensitive and unique? (this voids any benefit of caching from my first point)
What is the most common request that will happen and are you optimizing for that?
How much data is there? (tens of rows, hundreds, thousands, millions)?
Does your site have to work with JavaScript turned off (to support older browsers)?
Is your development team more comfortable doing this in the front-end or back-end?
The answer is that it depends on your situation.

Why don't I just build the whole web app in Javascript and Javascript HTML Templates?

I'm getting to the point on an app where I need to start caching things, and it got me thinking...
In some parts of the app, I render table rows (jqGrid, slickgrid, etc.) or fancy div rows (like in the New Twitter) by grabbing pure JSON and running it through something like Mustache, jquery.tmpl, etc.
In other parts of the app, I just render the info in pure HTML (server-side HAML templates), and if there's searching/paginating, I just go to a new URL and load a new HTML page.
Now the problem is in caching and maintainability.
On one hand I'm thinking, if everything were built using Javascript HTML Templates, then my app would serve just an HTML layout/shell and a bunch of JSON. If you look at the Facebook and Twitter HTML source, that's basically what they're doing (95% json/javascript, 5% html). This would make it so my app only needed to cache JSON (pages, actions, and/or records), which means you'd hit the cache whether you were some remote API developer accessing a JSON API or the straight web app. That is, I don't need 2 caches, one for the JSON and one for the HTML. That seems like it'd cut my cache store in half and streamline things a little bit.
On the other hand, I'm thinking, from what I've seen/experienced, generating static HTML server-side, and caching that, seems to be much better performance wise cross-browser; you get the graphics instantly and don't have to wait that split-second for javascript to render it. StackOverflow seems to do everything in plain HTML, so does Google, and you can tell... everything appears at once. Notice how though on twitter.com, the page is blank for .5-1 seconds, and the page chunks in: the javascript has to render the json. The downside with this is that, for anything dynamic (like endless scrolling, or grids), I'd have to create javascript templates anyway... so now I have server-side HAML templates, client-side javascript templates, and a lot more to cache.
My question is, is there any consensus on how to approach this? What are the benefits and drawbacks from your experience of mixing the two versus going 100% with one over the other?
Update:
Some reasons that factor into why I haven't yet made the decision to go with 100% javascript templating are:
Performance. Haven't formally tested, but from what I've seen, raw html renders faster and more fluidly than javascript-generated html cross-browser. Plus, I'm not sure how mobile devices handle dynamic html performance-wise.
Testing. I have a lot of integration tests that work well with static HTML, so switching to javascript-only would require 1) more focused pure-javascript testing (jasmine), and 2) integrating javascript into capybara integration tests. This is just a matter of time and work, but it's probably significant.
Maintenance. Getting rid of HAML. I love HAML, it's so easy to write, it prints pretty HTML... It makes code clean, it makes maintenance easy. Going with javascript, there's nothing as concise.
SEO. I know google handles the ajax /#!/path, but haven't grasped how this will affect other search engines and how older browsers handle it. Seems like it'd require a significant setup.
Persistent private data storage.
You need a server to store data with various levels of public/private access. You also need a server for secure, closed-source information. You need a server to do heavy lifting that you don't want to do on the client. Complex data querying is best left up to your database engine. Indexing and searching are not yet optimised for JavaScript.
Also, you have the issue of older browsers being far slower. If you're not running FF4/Chrome or IE9, then there is a big difference between doing data manipulation and page construction on the client versus on the server.
I myself am going to try to build a web application made entirely with an MVC framework and templates, but still using the server to connect to a secure and optimised database.
But in general the application can indeed be built entirely in JavaScript using templates. The various constructs and JavaScript engines have advanced enough to do this, and there are enough popular frameworks out there for it. Pure JavaScript web applications are no longer experiments and prototypes.
Oh, and if we're recommending frameworks for this, then take a look at backbone.js.
Security
Let's not forget that we do not trust the client. We need server-side validation. JavaScript is interpreted, dynamic and can be manipulated at run time. We never trust client input.
I used to mix these two approaches, but then switched to client-side rendering entirely, because it was really hard to handle heavy JavaScript properly otherwise. As a complete solution I can recommend the approach of the JavaScriptMVC framework.
It has a view rendering engine called EJS which can compress your views into plain JavaScript for a production build of your application. That makes it extremely fast (all your HTML gets preloaded with your single compressed JavaScript file, so it is rendered as soon as you receive your model's JSON data). I personally couldn't notice a performance difference between EJS rendering and transferring plain HTML.
Then for your API, following the REST principles, caching is one of the key constraints. So if your application supports HTTP caching properly for JSON data and using your compressed client side templates you might even see a performance improvement.
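On the caching side, if the server sends validators (Last-Modified/ETag) for its JSON responses, jQuery can take part in conditional requests; a sketch with a placeholder URL and a hypothetical rendering function:
$.ajax({
    url: '/api/orders',
    dataType: 'json',
    ifModified: true,   // send If-Modified-Since / If-None-Match when possible
    success: function (data, textStatus) {
        // textStatus is "notmodified" and data is undefined on a 304 response
        if (data) {
            renderOrders(data);   // hypothetical client-side template rendering
        }
    }
});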
I could be way off here, but...
Have you ever looked at CouchDB? (I have no affiliation with them, BTW.) I could be way wrong, but your situation sounds like it may be a perfect fit for Apache CouchDB. I haven't really used it myself yet, but I took a good look at it a short while back and it is a very interesting database.
It is a document-based database that uses a REST API for connections (very versatile and easy to use). It is also very JSON-centric, very fast, and has a tiny footprint; they say it can reside on phones and other embedded devices too, but at the same time it is supposed to be extremely scalable (upwards, that is). If you're a big JS user (which you sound like you are), then you may be right at home with it.
I was just thinking that it may come in handy in any number of ways that have been proposed here and thought I'd chime in just to give you an idea for storage options :)
