How to organize the code for a JavaScript UI widget?

The web app I'm working on has several fairly complex and stateful pieces of client-side UI. I'm trying to keep things sane by organizing them into composable JavaScript widgets. Some of the requirements are:
1) Some of the data required to initialize the widget needs to come from the server
2) Some of the data comes from the client (based on which item in a list is selected)
3) The state of some sections of the widget needs to be remembered when it's closed and reopened
4) The state of other sections needs to be forgotten and reset to the original setting
5) The page has to fully render very quickly, so AJAX calls need to be minimized
So, my question is: how do you organize the code for this widget? What I currently have involves sending down the HTML for the widget with some data (i.e. #1) filled in on the server side (to avoid sending the data down via an extra AJAX call). I then have a bunch of jQuery code that fills in the client-side stuff in an ad-hoc fashion, and then more code to reset everything back to the desired state on the second invocation. Surely there's a more declarative way to accomplish this?

I think you would be very happy using JavaScriptMVC. It makes it much easier to manage the state of components, and makes it pretty easy to manage data on the client side instead of making repeated AJAX calls. Do the tutorial and I think you will be quite happy.
EDIT
My original answer doesn't address the page rendering speed.
If you are really worried about page rendering speed, the best solution is to render the document on the server. The next best solution is to not worry about the problem until the final stages of development. It is pretty easy to optimize data-loading behavior, and you will come up with a much better solution after you have solidified the rest of your application design.
I have employed various techniques to improve render performance, such as cacheable AJAX, or injecting script blocks with data when the page is generated on the server. Both of these approaches have given me significant performance boosts, and both were easily applied at later stages in development.
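For instance, here is a minimal sketch of the script-block technique (the id and payload are illustrative, and jQuery is assumed to be loaded): the server writes the initial data into the page, and the client parses it instead of making an extra AJAX call.

```html
<!-- Written by the server at page-generation time (hypothetical payload) -->
<script id="widget-data" type="application/json">
  {"selectedItem": 2, "items": [{"id": 1, "name": "Alpha"}, {"id": 2, "name": "Beta"}]}
</script>
<script>
  // Parse the embedded payload on the client; no extra AJAX round-trip.
  var initialData = $.parseJSON($('#widget-data').text());
  // initialData.items is the server-provided list (requirement #1).
</script>
```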

If you're going to roll your own widget, and you're going to persist the state of the widget, then assign each widget a GUID/UUID and use that as part of the "container name" of the widget. (I take the last 13 digits of mine to ensure some sort of randomness; I don't maintain that it won't ever break.) You can do that part easily enough during object creation, either server-side or client-side. You can persist the GUID/UUID as the ID of the widget as well, so it isn't likely to ever change.
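A hedged sketch of what that might look like (the function name and 13-digit suffix rule are illustrative, jQuery assumed):

```javascript
// Hypothetical sketch: give each widget instance a (reasonably) unique
// container id, and reuse that id as the key for any persisted state.
function createWidgetContainer($parent) {
  // Last 13 digits of a timestamp-plus-random string; as noted above,
  // this is "some sort of randomness", not a guarantee of uniqueness.
  var suffix = (Date.now().toString() + Math.floor(Math.random() * 1e6)).slice(-13);
  var $el = $('<div/>', { id: 'widget-' + suffix, 'class': 'widget' });
  $parent.append($el);
  return $el;
}
```

The same id can then double as the key under which the widget's state is persisted, whether in a cookie, localStorage, or on the server.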
All this was given as part of "how do I compose the widget" for modularity's sake, especially during JavaScript operation. (The italicized section is what I thought was meant; further conversation leads me to believe that I may have been wrong. Leaving it for posterity now.)
Otherwise, see the comments above.

Related

PHP: Best way forward [serialisation, objects, Redis]

I have been developing a PHP application for quite a while. The (basic) idea is as follows: users can build web pages using blocks. These blocks can contain images, text, etc. Each of these blocks has its own options. These blocks are defined using Domain-Driven Design in PHP.
I've built the application to use a PHP-based Controller that handles the requests from a jQuery/JavaScript front end. Each time the user edits an option, it's sent to this Controller, which unserialises a collection of blocks (PHP objects) from Redis and/or the PHP session, and sets the attributes of the blocks that are edited or adds/removes one of the blocks. This is to enforce the Domain logic.
This was fine while developing for myself; I never kept race conditions and such in mind. While moving forward with the product, I've noticed that people lose data. I'll explain what happens:
User edits an option of a block
presses save
A request is made to the controller, which
unserialises the collection and
sets the blocks based on their UUID
puts the blocks back in the collection and
serialises the collection again.
There are scenarios where two concurrent requests might be created, and one will overwrite the edits of the other.
I know I need to rewrite this part of the application. The question is what the best approach is. I could:
Implement some JavaScript library, which will take a lot of work because it would require me to rewrite that entire part of the application. Also, I do not have a lot of experience implementing JavaScript-based solutions, but I don't mind stepping into something new. I do want JavaScript testing to prevent future problems from occurring and to enable cross-browser testing.
Apply Redis/session locking so the controller only processes a single request at a time, preventing concurrent requests from overwriting the data set by the previous request. This will lower the chance of concurrent requests causing data loss, but not eliminate it. People with really slow internet connections might have their connections drop while producing a lot of concurrent requests.
I'm curious what other approaches I might be missing, or if one of the two I mentioned above will suffice.
As far as I understand your problem, what you may want to implement is optimistic locking.
A simple way to implement it, is to version your aggregate.
Every time someone edits your object, increment its version.
When you POST your edited blocks, you send back the version on which you are trying to apply your changes.
Then, when getting your object back from your persistence storage, you compare the versions and ensure you are actually working on an up-to-date object.
If it is up to date, save it; if it is not, reject the modification, notify the user, reload the object, and take the appropriate action (that depends on your needs).
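A minimal sketch of that version check, assuming a hypothetical `store` that stands in for the Redis/session persistence (all names are illustrative):

```javascript
// Optimistic locking sketch: reject a save whose version is stale.
function saveBlocks(store, pageId, editedBlocks, clientVersion) {
  var current = store.load(pageId); // -> { version: n, blocks: [...] }
  if (current.version !== clientVersion) {
    // A concurrent request saved first: reject instead of overwriting.
    return { ok: false, serverVersion: current.version };
  }
  store.save(pageId, {
    version: current.version + 1, // bump on every successful write
    blocks: editedBlocks
  });
  return { ok: true, version: current.version + 1 };
}
```

The client sends along the version it last loaded with every save; on a conflict it reloads the fresh copy and then retries or discards the edit, per your needs.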

Should I render this template using JavaScript or the server?

I'm rendering a news feed.
I'm planning to use Backbone.js for my JavaScript stuff because I'm sick of doing manual DOM binds with jQuery.
So right now I'm looking at 2 options.
When the user loads the page, the "news feed" container is blank. But the page triggers a javascript which renders the items of the news feed onto the screen. This would tie into Backbone's models and collections, etc.
When the user loads the page, the "news feed" is rendered by the server. Even if javascript was turned off, the items would still show because the server rendered it via a templating engine.
I want to use Backbone.js to keep my javascript clean. So, I should pick #1, right?? But #1 is much more complicated than #2.
By the way, the reason I'm asking this question is because I don't want to use the routing feature of Backbone.js. I would load each page individually, and use Backbone for the individual items of the page. In other words, I'm using Backbone.js halfway.
If I were to use the routing feature of Backbone.js, then the obvious answer would be #1, right? But I'm afraid it would take too much time to build the route system, and time should be balanced into my equation as well.
I'm sorry if this question is confusing: I just want to know the best practice of using Backbone.js and saving time as well.
There are advantages and disadvantages to both, so I would say this: choose the option that is best for you, according to your requirements.
I don't know Backbone.js, so I'm going to keep my answer to client- versus server-side rendering.
Client-side Rendering
This approach allows you to render your structure quickly on the server-side, then let the user's JavaScript pick up the actual content.
Pros:
Quicker perceived user experience: if there's enough static content on the initial render, then the user gets their page back (or at least the beginning of it) quicker and they won't be bothered about the dynamic content, because in all likelihood that will render reasonably quickly too.
Better control of caching: By requiring that the browser make multiple requests, you can set up your server to use different caching headers for each URL, depending on your requirements. In this way, you could allow users to cache the initial page render, but require that a user fetch dynamic (changing) content every time (see the sketch after the cons below).
Cons:
User must have JavaScript enabled: This is an obvious one, and I shouldn't even need to mention it, but you are cutting out a (very small) portion of your user base if you don't provide a graceful alternative to your JS-heavy site.
Complexity: This one is a little subjective, but in some ways it's just simpler to have everything in your server-side language and not require so much back-and-forth. Of course, it can go both ways.
Slow post-processing: This depends on the browser, but the fact is that if a lot of DOM manipulation or other post-processing needs to occur after retrieving the dynamic content, it might be faster to let the server do it if the server is underutilized. Most browsers are good at basic DOM manipulation, but if you have to do JSON parsing, sorting, arithmetic, etc., some of that might be faster on the server.
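To illustrate the caching point above, here is a hedged sketch (Node/Express assumed; routes, headers, and the stub handlers are all illustrative) of serving different cache headers for the page shell and the dynamic content:

```javascript
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.set('Cache-Control', 'public, max-age=3600'); // shell may be cached
  res.send('<html><body><div id="feed"></div></body></html>');
});

app.get('/api/feed', function (req, res) {
  res.set('Cache-Control', 'no-store'); // dynamic content: always re-fetch
  res.json([{ title: 'hello' }]); // stub data for the sketch
});

app.listen(3000);
```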
Server-side Rendering
This approach allows the user to receive everything at once and also caters to browsers that don't have good JavaScript support, but it also means everything takes a bit longer before the browser gets the first <html> tag.
Pros:
Content appears all at once: If your server is fast, it will render everything all at once, and that's that. No messy XMLHttpRequests (does anyone still use those directly?).
Quick post-processing: Just like you wouldn't want your application layer to do sorting of a database queryset because the database is faster, you might also want to reserve a good amount of processing on the server-side. If you design for the client-side approach, it's easy to get carried away and put the processing in the wrong place.
Cons:
Slower perceived user experience: A user won't be able to see a single byte until the server's work is all done. Sure, the server is probably going to zip through it, but it's still a few extra seconds on the user's side and you would do them a favor by rendering what you can right away.
Does not scale as well because server spends more time on requests: It might be that you really want the server to finish a request quickly and move on to the next connection.
Which of these are most important to your requirements? That should inform your decision.
I don't know Backbone, but here's a simple thought: if at all possible and secure, do everything on the client instead of the server. That way the server has less work to do and can therefore handle more connections and scale better.
But #1 is much more complicated than #2.
Not really. Once you get the hang of Backbone and jQuery and client-side templating (and maybe throw CoffeeScript into the mix, too), this is not really difficult. In fact, it greatly simplifies your server code, as all the display-related functions are now removed. You could even have different clients (a mobile version, for example) running against the same server.
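For instance, a minimal sketch of option #1 with Backbone (the endpoint, the container id, and the shape of the returned JSON are all assumptions):

```javascript
// Assumes /api/news returns a JSON array like [{"title": "..."}, ...]
var NewsItems = Backbone.Collection.extend({ url: '/api/news' });

var FeedView = Backbone.View.extend({
  el: '#news-feed', // the container that starts out blank
  template: _.template('<li><%- title %></li>'),
  initialize: function () {
    this.listenTo(this.collection, 'sync', this.render);
  },
  render: function () {
    var tpl = this.template;
    this.$el.html(this.collection.map(function (item) {
      return tpl(item.toJSON());
    }).join(''));
    return this;
  }
});

var view = new FeedView({ collection: new NewsItems() });
view.collection.fetch(); // one AJAX call fills the blank container
```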
Even if javascript was turned off, the items would still show because the server rendered it via a templating engine.
That is the important consideration here. If you want to support users without JavaScript, then you need a non-JS version.
If you already have a non-JS version, you can think about whether you still need the "enhanced" version, and if you do, whether you want to re-use the server-side templating you have already coded, tested, and need to maintain anyway, or duplicate the effort client-side. The duplication adds development cost, but as you say it may provide a superior experience and lower the load on the server (although I cannot imagine that fetching rendered data versus fetching XML data makes that much of a difference).
If you do not need to support users without Javascript, then by all means, render on the client.
I think Backbone's aim is to organize a JavaScript in-page client application. But first of all you should take a position on the following statement:
Even if JavaScript were turned off, the web app would still work in "post-back mode".
Is that one of your requirements? (This is not a simple requirement.) If not, then my advice is: "Do more JS." But if it is, then I believe your best friend is jQuery's load function.
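A small sketch of that idea (selector names are illustrative): without JS the links below are ordinary post-backs; with JS, load() fetches the linked page and swaps in just one fragment of it.

```javascript
$('a.section-link').on('click', function (e) {
  e.preventDefault();
  // jQuery's load() can extract a single fragment from the fetched page.
  $('#content').load(this.href + ' #content');
});
```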
A note: I'm a Java programmer, and there are a lot of server-side frameworks that bring the ability to write applications that work via AJAX when JS is enabled and switch to post-backs when it isn't. I think Wicket and Echo2 are two of them, but bear in mind they are server-side libraries...

Opinion Regarding Filtering of Content using JS

I'm working on a project and there is some debate about how some JS filtering should be implemented, so I would like to ask you guys for input on this.
Today we have a site that displays a long list of repeated entries of data, and some JS filtering would be nice for users to navigate through it. The usual stuff: keyword, order, date, price, etc. The question is not the use of JS, which is obvious, but the origin of the data. One person argues that the HTML itself should be used, with the JS parsing through it to apply the user's desired filtering. Another argues that we should use JSON generated on the server, and that the JSON should be the data's origin.
What you guys think on this? What are the pros and cons?
As a final request, I would like you to be as informative as possible, since your answers will be used and referenced by all of us in the company. (Yes, that is how much we trust you! :)
The right choice is a matter of taste and system architecture as well as utility.
I would go with dynamically generated pages with JS and JSON. These days I think you can safely assume that most browsers have JavaScript enabled; however, you may need to make provisions for crawlers (GoogleBot, Bing, Ask, etc.), as they may not fully execute all JS and hence may not index the page, unless you figure out some kind of exception for supporting them.
Using JS+JSON also means that support for mobile devices is handled client-side, without the web server having to create anything special.
Doing DOM parsing as the alternative would not be my best friend, as the logic of the page control and layout gets split up in two places: partly in the view controller on the web server, and partly in the JavaScript. In my opinion it is better to have it in one place and have the view controller only generate JSON and serve the root pages, etc.
However, this is a matter of taste, and I'm not sure I would be able to say that there is one correct and best solution.
I think it's a lot cleaner if the data is delivered in JSON and then the presentation HTML, or view, of that data is generated from that JSON with JavaScript.
This fits the more classic style of keeping core data structures separate from views. In this manner you can generate all types of views without having to constantly munge/revise the way you store, access and manipulate the data. You can even build classes and methods to develop a clean interface on your data that is entirely independent of how that data is displayed.
The only issue I see with that is if the browser doesn't support JavaScript and that browser is a desired viewer. In that case, you have to include a default HTML version from the server that will obviously not be manipulated, and the JSON will be ignored.
The middle ground is that you include both JSON and the "default", initial HTML view of that data in rendered HTML. The view comes up quickly and non-JS browsers can see something useful. But, then any future manipulation of the view (sorting, for example) uses the JSON data and generates a new clean view from the JSON data. No data is then ever "parsed" from the HTML view.
In larger projects, this also can facilitate the separation of presentation from data manipulation so different people may work on creating HTML views vs. manipulate the data (like sorting).
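A sketch of that middle ground (ids and row fields are illustrative; jQuery assumed): the server renders the default list and embeds the same rows as JSON, and any re-sort rebuilds the view from the JSON alone.

```javascript
// Embedded by the server alongside the default <ul id="row-list">.
var rows = $.parseJSON($('#row-data').text());

function renderRows(list) {
  // Rebuild the view from the data; escape real data before injecting.
  $('#row-list').html($.map(list, function (row) {
    return '<li>' + row.name + ' (' + row.price + ')</li>';
  }).join(''));
}

$('#sort-by-price').click(function () {
  rows.sort(function (a, b) { return a.price - b.price; });
  renderRows(rows); // nothing is ever parsed back out of the HTML
});
```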
I would make multiple AJAX calls to the server and have it return the sorted/filtered data. If your server backend is fast, then it won't be very taxing, and you could even cache the data between requests.
If you only have 50-100 items, then it would be reasonable to send them all to the client and have JavaScript sort and filter them.
Some considerations to help make the decision
Is the information sensitive and unique? (This voids any benefit of caching from my first point.)
What is the most common request that will happen and are you optimizing for that?
How much data is there? (tens of rows, hundreds, thousands, millions)?
Does your site have to work with JavaScript turned off? (Supporting older browsers?)
Is your development team more comfortable doing this in the front-end or back-end?
The answer is that it depends on your situation.

Just In General: JS Only Vs Page-Based Web Apps

When developing a web app, versus a web site, what reasons are there to use multiple HTML pages rather than using one HTML page and doing everything through JavaScript?
I would expect that it depends on the application -- maybe -- but would appreciate any thoughts on the subject.
Thanks in advance.
EDIT:
Based on the responses here, and some of my own research, if you wanted to do a single-page, fully JS-Powered site, some useful tools would seem to include:
jQuery plug-ins:
jQuery History:
http://balupton.com/projects/jquery-history
jQuery Address:
http://plugins.jquery.com/project/jquery-address
jQuery Pagination:
http://plugins.jquery.com/project/pagination
Frameworks:
SproutCore
http://www.sproutcore.com/
Cappuccino
http://cappuccino.org/
Possibly, JMVC:
http://www.javascriptmvc.com/
Page-based applications provide:
ability to work on any browser or device
simpler programming model
They also provide the following (although these are solvable by many JS frameworks):
bookmarkability
browser history
refresh or F5 to repeat action
indexability (in case the application is public and open)
One of the bigger reasons is going to be how searchable your website is.
Doing everything in JavaScript is going to make it complicated for search engines to crawl all the content of your website, and thus they may not fully index it. There are ways around this (with Google's recent AJAX SEO guidelines), but I'm not sure if all search engines support this yet. On top of that, it's a little bit more complex than just making separate pages.
The bigger issue, whether you decide to build multiple HTML pages or to use some sort of framework or CMS to generate them for you, is making sure the different sections of your website have URLs that are unique to them. E.g., an about section would have a URL like mywebsite.com/about, and that URL is used on the actual "about" link within the website.
One of the biggest downfalls of single-page, Ajax-ified websites is complexity. What might otherwise be spread across several pages suddenly finds its way into one huge, master page. Also, it can be difficult to coordinate the state of the page (for example, tracking if you are in Edit mode, or Preview mode, etc.) and adjusting the interface to match.
Also, one master page that is heavy on JS can be a performance drag if it has to load multiple, big JS files.
At the OP's request, I'm going to discuss my experience with JS-only sites. I've written four relevant sites: two JS-heavy (Slide and SpeedDate) and two JS-only (Yazooli and GameCrush). Keep in mind that I'm a JS-only-site bigot, so you're basically reading John Hinkley on the subject of Jody Foster.
The idea really works. It produces graceful, responsive sites at very low operational cost. My estimate is that the cost for bandwidth, CPU, and such drops to 10% of the cost of running a similar page-based site.
You need fewer but better (or at least better-trained) programmers. JavaScript is a powerful and elegant language, but it has huge problems that a more rigid and unimaginative language like Java doesn't have. If you have a whole bunch of basically mediocre guys working for you, consider JSP or Ruby instead of JS-only. If you are required to use PHP, just shoot yourself.
You have to keep basic session state in the anchor tag (the URL fragment). Users simply expect that the URL represents the state of the site: reload, bookmark, back, forward. jQuery's Address plug-in will do a lot of the work for you.
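A hand-rolled sketch of the same idea, for illustration (section names and ids are assumptions; the Address plug-in automates this and more):

```javascript
function showSection(name) {
  $('.section').hide();
  $('#section-' + name).show();
}

$(window).on('hashchange', function () {
  // '#/news' or '#news' -> 'news'; fall back to a default section.
  showSection(location.hash.replace(/^#\/?/, '') || 'home');
}).trigger('hashchange'); // render whatever state the URL already holds
```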
If SEO is an issue for you, investigate Google Ajax Crawling. Basically, you make a very simple parallel site, just for search engines.
When would I not use JS-only? If I were producing a site that was almost entirely content, where the user did nothing but navigate from one place to another, never interacting with the site in a complicated manner. So, Wikipedia and ... well, that's about it. A big reference site, with a lot of data for the user to read.
Modularization.
Multiple files allow you to more cleanly break out different workflow paths and process parts.
Chances are your business rules do not usually directly impact your layout rules, and multiple files better help you edit what needs to be edited without the risk of breaking something unrelated.
I actually just developed my first application using only one page.
...it got messy.
My idea was to create an application that mimicked the desktop environment as much as possible. In particular, I wanted a detailed view of some app data to be in a popup window that would maintain its state regardless of the section of the application the user was in.
Thus my Frankenstein was born.
What ended up happening due to budget/time constraints was the code got out of hand. The various sections of my JavaScript source got muddled together. Maintaining the proper state of various views I had proved to be... difficult.
With proper planning and technique, I think the "one-page" approach is a very easy way to open up some very interesting possibilities (e.g. widgets that maintain state across application sections). But it also opens up many... many potential problem areas, including:
Flooding the global namespace (if you don't already have your own... make one)
Code organization can easily get... out of hand
Context - It's very easy to
I'm sure there are more...
In short, I would urge you to stay away from relying on a JavaScript dependency for the compatibility issues alone. What I've come to realize is that there is simply no need to rely on JavaScript for everything.
I'm actually in the process of removing JavaScript dependencies in favor of Progressive Enhancement. It just makes more sense. You can achieve the same or similar effects with properly coded JavaScript.
The idea is to (see the sketch after this list):
Develop a well-formatted, fully functional application without any JavaScript
Style it
Wrap the whole thing with JavaScript
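As a sketch of step 3 (ids are illustrative): the form works as a plain post without JavaScript, and the wrapper merely upgrades it to AJAX.

```javascript
$('#comment-form').submit(function (e) {
  e.preventDefault(); // with JS: intercept; without JS: normal post
  $.post(this.action, $(this).serialize(), function (html) {
    $('#comments').append(html); // server returns the rendered fragment
  });
});
```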
Using Progressive Enhancement, one can develop an application that delivers the best experience possible for each user.
For some additional arguments, check out The Single Page Interface Manifesto and some (mostly) negative reaction to it on Hacker News (link at the bottom of the SPI page):
The Single Page Interface Manifesto: http://itsnat.sourceforge.net/php/spim/spi_manifesto_en.php
stofac, first of all, thanks for the link to the Single Page Interface (SPI) Manifesto (I'm the author of this boring text)
That said, SPI != doing everything through JavaScript.
Take a look at this example (server-centric):
http://www.innowhere.com/insites/
The same in GAE:
http://itsnatsites.appspot.com/
More info about the GAE approach:
http://www.theserverside.com/news/thread.tss?thread_id=60270
In my opinion, coding a complex SPI application/web site fully in JavaScript is very, very complex and problematic. The best approach, in my opinion, is "hybrid programming" for SPI: a mix of server-centric for big state management and client-centric (a.k.a. JavaScript by hand) for special effects.
Doing everything on a single page using AJAX everywhere would break the browser's history/back-button functionality and be annoying to the user.
I utterly despise JS-only sites where it is not needed. That extra condition makes all the difference. By way of example, consider the oft-quoted Google Docs: in that case JS not only improves the experience, it is essential. But some parts of Google Help have been JS-only, and yet there it adds nothing to the experience; it is only showing static content.
Here are reasons for my upset:
Like many, I am a user of NoScript and love it. Pages load faster, I feel safer, and the more distracting adverts are avoided. The last point may seem like a bad thing for webmasters, but I don't want anyone to get rewarded for pushing annoying flashy things in my face; if tactless advertisers go out of business, I consider it natural selection.
Obviously this means some visitors to your site are either going to be turned away or feel hassled by the need to provide a temporary exclusion. This reduces your audience.
You are duplicating effort. The browser already has a perfectly good history function, and you shouldn't need to reinvent the wheel by redrawing the previous page when the back button is clicked. To make matters worse, going back a page shouldn't require re-rendering. I guess I am a student of the If-it-ain't-broke-don't-fix-it School (from Don't-Repeat-Yourself U.).
There are no HTTP headers when traversing "pages" in JS. This means no cache controls, no expiries, content cannot be adjusted for requested language nor location, no meaningful "page not found" nor "unavailable" responses. You could write error handling routines within your uber-page that respond to failed AJAX fetches but that is more complexity and reinvention, it is redundant.
No caching is a big deal for me, without it proxies cannot work efficiently and caching has the greatest of all load reducing effects. Again, you could mimic some caching in your JS app but that is yet more complexity and redundancy, higher memory usage and poorer user experience overall.
Initial load times are greater. By loading so much Javascript on the first visit you are causing a longer delay.
More JavaScript complexity means more debugging in various browsers. Server-side processing means debugging only once.
Unfuddle (a bug tracker) left a bad taste. One of my most unpleasant web experiences was being forced to use this service by an employer. On the surface it seems well suited: the JS-heavy section is private, so it doesn't need to worry about search engines, and only repeat visitors will be using it, so they have time to turn off protections and shouldn't mind the initial JS library load.
But its use of JS is pointless: most content is static. "Pages" were still being fetched (via AJAX), so the delay was the same. With the benefit of AJAX it should have been polling in the background to check for changes, but I wouldn't get notified when the visible page had been modified. Sections had different styles, so there was an awkward re-rendering when traversing them; loading external stylesheets via JavaScript is Bad Practice™. Ease of use was sacrificed for whizz-bang "look at our Web 2.0" features. Such a business-oriented application should concentrate on speed of retrieval, but it ended up slower.
Eventually I had to refuse to use it as it was disrupting the team's work flow. This is not good for client-vendor relationships.
Dynamic pages are harder to save for offline use. Some mobile users like to download in advance and turn off their connection to save power and data usage.
Dynamic pages are harder for screen readers to parse. While the number of blind users is probably smaller than the number with NoScript or a mobile connection, it is inexcusable to ignore accessibility, and in some countries even illegal; see the "Disability Discrimination Act" (1999) and "Equality Act" (2010).
As mentioned in other answers, "Progressive Enhancement", née "Unobtrusive JavaScript", is the better approach. When I am required to make a JS-only site (remember, I don't object to it on principle, and there are times when it is valid), I look forward to implementing the aforementioned AJAX crawling, and I hope it becomes more standardised in the future.

Thinking of a new way of building a DB page, populating data via API calls - are there any issues doing it this way?

Up to now, for DB-driven web sites, I've used PHP (and CodeIgniter) to populate the data within the page prior to rendering. What I'm thinking about doing now is to develop a JavaScript (via jQuery) page, make it as interactive as possible, and then connect to the DB through AJAX/JSON calls, so NO data is populated on the screen prior to rendering.
WHY? The idea is that I could, some day, hook the same web page to different data sources: a true separation of page from data, linked only via AJAX.
I think the biggest issue could be performance... are there other things to watch out for? What's the best approach to handling security (stateless/sessionless)?
The biggest question is accessibility. What about those people using screen readers, for whom JavaScript doesn't work? What about those on mobile phones (non-smartphones), again with very limited or no JavaScript functionality? What about those people who have simply disabled JS? Even these days, you simply can't assume that everyone can use JS.
I like the original idea, but perhaps this would be better done via a simple server-side wrapper, which calls out to your data source but which can be quickly and easily changed to point at a different one.
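A minimal sketch of that wrapper (Node/Express assumed; the route and helper names are illustrative): the page is coded against one URL, and swapping data sources later means changing only the helper.

```javascript
var express = require('express');
var app = express();

function fetchItems() {
  // Stand-in for the real data source (DB today, external API tomorrow).
  return [{ id: 1, name: 'example' }];
}

app.get('/api/items', function (req, res) {
  res.json(fetchItems()); // the client only ever talks to this route
});

app.listen(3000);
```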
Definitely something I've considered doing, but you'd probably want to develop some kind of framework (or see if someone already has) if you're going to do this. Brute-forcing this kind of thing will lead to a lot of redundant code and unnecessary hair loss. Perhaps a jQuery plugin? I'd be very interested to see what you come up with.
