Scenario: in order to make Lighthouse analytics happy (i.e. reduce download size, blocking time, and so on), a web app has been chopped up so that previously in-page dialogs/sequences
are now independent, subsequently loaded pages.
Naturally this means that a single library download has now become a download for each of the resulting dialog pages. As far as download size goes, any caching mechanism will ameliorate the multiplier, and the Closure Compiler has been used to massively shrink the payload. But this leaves the blocking problem of re-parsing and re-compiling the library for every page, when only one parse/compile on the target client should actually be needed if we were optimal.
So, the question is: is there any mechanism to flag library code such that the V8 engine
retains and reuses the compiled code (bytecode) it produced the first time? [Note: a ServiceWorker implementation is not the answer for the library/framework.]
V8, in combination with the way it's embedded into Chrome, does have the ability to cache code. You don't need to flag anything; just like HTTP caching, the browser has certain heuristics to decide by itself whether/when to cache things.
Say somebody were to create a large application based on WebGL. Let's say that it's a 3D micromanagement game which takes approximately 700 megabytes of files to run.
How would one deal with loading the assets? I would have assumed it has to be done asynchronously, but I am unsure how exactly it would work.
P.S. I am thinking of RollerCoaster Tycoon as an example, but really this is about loading large assets from a server into the browser.
Well, first off, you don't want your users to download 700 megabytes of data, at least not all at once.
One should try to keep as many resources (geometry, textures) as possible procedural.
All data that needs to be downloaded should be loaded progressively/on demand using multiple Web Workers,
since you will probably still need to process the data with JavaScript, which can become quite CPU-heavy when you have many resources.
Packing the data into larger bundles is also advisable, to avoid request overhead.
Naturally, one would gzip all resources and try to preload data as soon as the user hits the website. When using image textures and/or text content, embedding them in the HTML (using <img> and <script> tags) allows you to exploit the browser cache to some extent.
Using WebSQL/IndexedDB/localStorage is possible, but due to the currently very low quotas and the flaky or missing implementations of the Quota Management API, it's not a feasible solution right now.
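As a sketch of the progressive/on-demand loading described above (the function name, URL and progress callback are illustrative; in a real app the resulting buffer would typically be handed to a Web Worker for decoding):

```javascript
// Stream a large packed asset file, reporting progress so the app can show
// a loading bar and start work before everything has arrived.
async function loadAsset(url, onProgress) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const total = Number(res.headers.get('Content-Length')) || 0;
  const reader = res.body.getReader();
  const chunks = [];
  let received = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    received += value.length;
    onProgress(received, total); // drive a progress indicator here
  }
  // Stitch the chunks into one buffer, e.g. to post to a Web Worker.
  const bytes = new Uint8Array(received);
  let offset = 0;
  for (const chunk of chunks) { bytes.set(chunk, offset); offset += chunk.length; }
  return bytes.buffer;
}
```

Several of these calls can run in parallel, one per packed bundle, which pairs naturally with the "larger packages" advice above.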
I'm working on a memory-intensive web app (on the order of several hundred megabytes to a gigabyte or more) and I'm trying to figure out what my options are for memory management. There doesn't seem to be any way to tell whether my application is approaching the memory limit of the browser / JavaScript engine, and once the app exceeds that limit the browser just crashes.

I could just be super conservative with the amount of memory I use in order to support browsers running on low-end machines, but that would sacrifice performance on higher-end machines. I know JavaScript was never designed to use large amounts of memory, but here we are now with HTML5, canvas, WebGL, typed arrays, etc., and it seems a bit short-sighted that there isn't also a way in JavaScript to determine how much memory a script is able to use. Will something like this be added to browsers in the future, or is there a browser-specific API available now? What are my options here?
Update
I'm not sure that it matters, but for what it's worth: I'm displaying and manipulating hundreds of large images, in file formats not supported by web browsers, so I have to do all of the decompression in JavaScript and cache the decompressed pixel data. The images are displayed on a canvas one at a time, and the user can scroll through them.
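For a situation like the one described above, one common way to bound the memory held by the cache of decompressed pixel data is a least-recently-used (LRU) eviction policy. A minimal sketch, where the entry limit and the decode callback are placeholders for your own sizing logic (in practice you would budget by bytes rather than entry count):

```javascript
// Keep only the N most recently used decoded images; anything evicted is
// re-decoded on the next cache miss instead of crashing the browser.
class LruCache {
  constructor(maxEntries, loader) {
    this.maxEntries = maxEntries;
    this.loader = loader;     // e.g. your JavaScript decompression routine
    this.entries = new Map(); // Map preserves insertion order, oldest first
  }
  get(key) {
    if (this.entries.has(key)) {
      const value = this.entries.get(key);
      this.entries.delete(key); // re-insert to mark as most recently used
      this.entries.set(key, value);
      return value;
    }
    const value = this.loader(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxEntries) {
      // evict the least recently used entry (the first key in the Map)
      this.entries.delete(this.entries.keys().next().value);
    }
    return value;
  }
}
```

With the images shown one at a time, an LRU keyed on the scroll position tends to keep exactly the neighborhood the user is looking at.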
I have a web page that references and initializes multiple instances of the same ASP.NET generic user control.
What I want to do is cache/store the entire contents (HTML) of those controls somewhere on the client, using the jQuery detach() method.
The localStorage solution does not fit here, as its 5MB limit is too low for my needs.
For now, I am using a global variable, specifically a JavaScript key-value object, to store the data.
What do you think of this solution? Will I notice any lag or performance issues in the browser? Is there any size limit on that global variable?
Also, is there a better way to implement such a task?
For cross-browser compatibility, you can try an AJAX call that pulls in your massive data incrementally, and cache the call (stored as JSON/JSONP). jQuery has a cache mechanism, but the meat of the implementation is going to be in the headers of the pages you call. Specifically, you're going to want to add Expires, Last-Modified, and Cache-Control headers on the pages you AJAX in.
Then you'll want to pull in the data asynchronously and do the appropriate UI manipulation (if needed).
You don't want to store massive data in a single variable, since it will take longer to work with as it passes through the JavaScript engine.
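A minimal sketch of the cached asynchronous pull described above, where repeat requests share one round-trip (the function name and URL shape are illustrative, and the HTTP cache headers mentioned earlier still apply underneath):

```javascript
// In-memory memoization of AJAX data pulls: the promise itself is cached,
// so even concurrent callers share a single request.
const dataCache = new Map();

async function getData(url) {
  if (dataCache.has(url)) return dataCache.get(url);
  const promise = fetch(url).then(res => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  });
  dataCache.set(url, promise);
  return promise;
}
```

Once the data arrives, do the UI manipulation from the resolved value rather than stashing the raw payload in one giant variable.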
- localStorage: still an edge technology, implemented differently across vendors, and not backwards compatible (although there are JavaScript libraries that help mitigate backwards compatibility).
- Cookies: not big enough.
- On-page JSON or a JS variable: you lose abstraction and increase initial page weight (which is going to be unsatisfactory if you're on mobile).
Whatever implementation you choose, I would run some simple benchmark performance tests so you have metrics to back up your decision.
This will cause browser lag and multiple issues. You can pretty much guarantee that a mobile browser won't work in this scenario, because no sensible mobile browser is going to let you download and store 5MB+ in localStorage. It is a really bad idea to put 5MB+ of HTML into the DOM of any browser and not expect performance issues.
If you're not concerned about mobile, then look at IndexedDB. It allows a greater amount of storage and it persists even after the session is closed. It is fairly well supported in recent Chrome and Firefox browsers, but requires IE10 or higher.
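A minimal sketch of persisting large HTML fragments in IndexedDB (the database and store names are made up for illustration; the raw event-based API is wrapped in promises):

```javascript
// Open (and on first use, create) a database with one object store.
function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('fragmentCache', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('fragments');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Persist one detached control's HTML under a key.
async function saveFragment(key, html) {
  const db = await openDb();
  const tx = db.transaction('fragments', 'readwrite');
  tx.objectStore('fragments').put(html, key);
  return new Promise((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

// Read it back later, even in a new session.
async function loadFragment(key) {
  const db = await openDb();
  const req = db.transaction('fragments', 'readonly')
                .objectStore('fragments').get(key);
  return new Promise((resolve, reject) => {
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}
```

Unlike the global-variable approach, this survives page reloads and keeps the big payload out of the JavaScript heap until it is actually needed.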
I want to design a web application whose only interface is JSON, i.e. all HTTP requests receive responses only in JSON format and no HTML is rendered on the server side. All form posts convert the form data into a JSON object and then post it as a string. All rendering is done by client-side JavaScript.
The one downside of this approach I know of is that browsers without JavaScript won't be able to do much with this architecture, but the interaction on the site is rich enough to be useless to non-JavaScript browsers anyway.
Are there any other downsides of this approach of designing a web application?
It's an increasingly common pattern, supported by tools such as GWT and Ext JS. Complex web apps such as Gmail have been over 90% JS-created DOM for some time. If you are developing a traditional 'journal' type website with mainly written content to be read, this approach will be overkill. But for a complex app that wishes to avoid page refreshes it may well be appropriate.
One downside is that not only does it require a browser that supports JavaScript, it is also easy for the computing resources required by the app to creep up to the point where it needs quite a powerful browser. If you develop in Chrome on a top-end PC you might come to run the app on a less powerful machine such as a netbook or mobile device and find it has become quite sluggish.
Another downside is you lose the opportunity to use HTML tools when working on your pages, and that viewing your application's pages' DOM trees under Firebug or Chrome Developer Tools may be hard work because the relationship between the elements and your code may not be clear.
Edit: another thing to consider is that it is more work to make pages accessible, as keyboard shortcuts may have to be added (you may not be able to use the browser's built-in behavior here) and users with special needs may find it more difficult to vary the appearance of the app, for instance by increasing font size.
Another edit: it's unlikely that text content on your website will be successfully crawled by search engines. For this reason you sometimes see server-created, text-only pages representing the same content, which refer browsers to the JS-enabled version of the page.
Other than the issue you point out, there's another: Speed. But it's not necessarily a big issue, and in fact using JSON rather than HTML may (over slower connections) improve rather than hamper speed.
Web browsers are highly optimised to render HTML, both whole pages (e.g., normally) and fragments (e.g., innerHTML and the various wrappers for it, like jQuery's html or Prototype's update). There's a lot you can do to minimize the speed impact of working through your returned data and rendering the result, but nothing is going to be quite as fast as grabbing some HTML markup from the server and dumping it into the browser for display.
Now, that said, it's not necessarily going to be a big problem at all. If you use efficient techniques (some notes in this article), and if you primarily render the results by building up HTML strings you then hand to the browser (again, via innerHTML or wrappers for it), or if you're rendering only a few elements at a time, it's unlikely that there will be any perceptible speed difference.
If, instead, you build up substantial trees by creating and appending individual elements via the DOM API or wrappers for it, you're very likely to notice a performance impact. That seems like the right thing to do, but it involves lots of trips across the DOM/JavaScript boundary and means the browser has to present the DOM version of things to your code at all intermediate steps; in contrast, when you hand it an HTML string, it can do its thing and romp through it full-speed-ahead. You can see the difference in this performance test. It's substantial.
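The faster string-building approach might look like this (renderListHtml and escapeHtml are hypothetical helpers; the escaping shown covers only the characters needed for text content and quoted attributes):

```javascript
// Build one HTML string and hand it to innerHTML in a single operation,
// instead of creating and appending elements one by one via the DOM API.
function renderListHtml(items) {
  const parts = ['<ul>'];
  for (const item of items) {
    parts.push(`<li>${escapeHtml(item)}</li>`);
  }
  parts.push('</ul>');
  return parts.join('');
}

// Escape data before interpolating it into markup.
function escapeHtml(value) {
  return String(value).replace(/[&<>"]/g,
    ch => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[ch]));
}

// Usage sketch (in the browser):
// container.innerHTML = renderListHtml(jsonData.items);
```

One assignment to innerHTML means one trip across the DOM/JavaScript boundary, which is the difference the performance test above measures.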
Over slower connections, the speed impact may be made up for or even overcome if the JSON data is more compact than the HTML would have been, because of the smaller size on the wire.
You've got to be more mindful of high-latency, low-bandwidth connections when you're building something like this. The likelihood is, you're going to be making a lot of Ajax calls to sync data and grab new data from the server, and the lag can be noticeable if there's a lot of latency. You need a strategy in place to keep the user informed about the progress of any communication between client and server.
In development, it's something that can be overlooked, especially if you're working with a local web server, but it can be killer in production. It means looking into prefetching and caching strategies.
You also need an effective way to manage HTML fragments/templates. Obviously, there are some good modules out there for rendering templates - Mustache.js, Underscore template, etc. - but keeping on top of the HTML fragments can cause some maintenance headaches. I tend to store the HTML templates in separate files, and load them dynamically via Ajax calls (plus caching to minimise HTTP requests).
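The placeholder substitution those libraries perform can be illustrated with a toy renderer (real engines such as Mustache.js add escaping, sections and partials; this sketch only handles simple `{{name}}` substitution):

```javascript
// Replace {{key}} placeholders in a template string with values from data;
// unknown keys render as the empty string.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g,
    (match, key) => (key in data ? String(data[key]) : ''));
}

// Usage sketch: a fragment fetched via Ajax, then filled with JSON data.
// container.innerHTML = renderTemplate(fragment, { name: 'Ann' });
```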
Edit - another con:
Data syncing - if you use a server database as your data "authority" then it's important to keep data in sync between the server and client. This is even more relevant if changes to data on one client affects multiple clients. You then get into the realms of dealing with realtime, asynchronous updates, which can cause some interesting conceptual challenges. (Fortunately, using frameworks and libraries such as Socket.IO and Backbone.js can really make things easier).
Edit - pros:
There are some huge advantages to this type of application - it's far more responsive, and can really enhance the user experience. Trivial actions that would normally require a round-trip to the server and incur network overhead can now be performed quickly and seamlessly.
Also, it allows you to more effectively couple data to your views. Chances are, if you're handling the data on the client side, you will have a framework in place that allows you to organise the data and make use of an ORM-like layer, whether it's Backbone.js, Knockout.js or something similar. You no longer have to worry about storing data in HTML attributes or in temporary variables. Everything becomes a lot more manageable, and it opens the door for some really sophisticated UI development.
Also also, JavaScript opens up the possibility for event-driven interaction, which is the perfect paradigm for highly interactive applications. By making use of the event loop, you can hook your data directly to user-initiated and custom events, which opens up great possibilities. By hooking your data models directly to user-driven events, you can robustly handle updates and changes to data and render the appropriate output with minimal fuss. And it all happens at high speed.
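A toy version of that idea, hooking render callbacks directly to data changes (the class and method names are made up; Backbone.js and Knockout.js provide much richer versions of the same pattern):

```javascript
// A minimal observable model: listeners registered with on() fire whenever
// set() changes the data, so views re-render with no manual bookkeeping.
class Model {
  constructor(data) {
    this.data = { ...data };
    this.listeners = [];
  }
  on(listener) {
    this.listeners.push(listener);
  }
  set(key, value) {
    this.data[key] = value;
    for (const listener of this.listeners) listener(key, value);
  }
}

// Usage sketch: bind a render function, then let user events drive set().
// const cart = new Model({ count: 0 });
// cart.on((key, value) => renderBadge(value));
// button.addEventListener('click', () => cart.set('count', cart.data.count + 1));
```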
I think the most important thing is your requirement: if you want to build an interactive application with a desktop-like feel, then go for client-side development. Using a JavaScript framework like Backbone.js or Knockout.js will really help in organizing and maintaining the code. The advantages are already detailed in the previous answers. As far as rendering performance compared to server-side rendering is concerned, here is a nice blog post that got me thinking:
http://openmymind.net/2012/5/30/Client-Side-vs-Server-Side-Rendering/