Memory-mapped file equivalent for Firefox OS - JavaScript

How would you emulate a memory-mapped file in Firefox OS, Tizen, or any other pure-JS mobile environment?
The use case is a mobile browser where you need lots of data that does not fit in RAM, or you don't want to waste RAM on it yet and prefer to lazy-load it.
The only thing I have found is IndexedDB. What else can I do? Any better tricks or APIs?
Hmm, it looks like Web SQL Database could also be a solution on Android, Tizen or iOS, but Firefox does not support it (?)
Update: I'm asking because of some experiments

First things first: Web SQL will never be standardised, as explained in the specification, so it should be considered only for WebKit/Blink-based browsers.
There is an awesome overview of offline storage options in this question; even though that question focuses on map tiles, I think it is still relevant to your use case.
I believe you are on the right track with IndexedDB for the graph data. At a high level, it is an asynchronous key-value object store (see the Basic Concepts document). For your use case, you could index graph nodes in an object store. There is, for example, the LevelGraph library, which stores graph data in IndexedDB, though it is built for Semantic Web triples. HeliosJS is also worth mentioning, though it is an in-memory graph database.
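For illustration, a minimal sketch of indexing graph nodes in IndexedDB (the database name, store name and node shape are made up for this example):

    // Sketch: keying graph nodes by id so single nodes can be lazy-loaded.
    var request = indexedDB.open('graph-db', 1); // hypothetical name

    request.onupgradeneeded = function (event) {
      var db = event.target.result;
      db.createObjectStore('nodes', { keyPath: 'id' });
    };

    request.onsuccess = function (event) {
      var db = event.target.result;
      var tx = db.transaction('nodes', 'readwrite');
      tx.objectStore('nodes').put({ id: 42, edges: [7, 13], lat: 49.0, lon: 8.4 });
      tx.oncomplete = function () {
        // Later: fetch one node without loading the rest of the graph.
        db.transaction('nodes').objectStore('nodes').get(42).onsuccess = function (e) {
          console.log(e.target.result);
        };
      };
    };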
Edit: The current API for IndexedDB is asynchronous. There is a synchronous API drafted in the spec, which could be used only in web workers. Unfortunately, no engine currently implements it. There is a pending patch for Gecko, but I did not find any plans for Blink or WebKit, so it is not a meaningful option for now.
It is possible to access raw files through Web APIs. You could use XHR2 to load a (local) file as a binary Blob. Unfortunately, XHR2 is mostly designed for streaming files rather than random access, though you could split the data into multiple files and request them on demand, but that may be slow.
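For example, a chunk could be requested as an ArrayBuffer roughly like this (the URL is hypothetical, and the Range header only helps if the server supports byte ranges):

    // Sketch: fetching one chunk of a large file with XHR2.
    function loadChunk(url, start, end, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, true);
      xhr.responseType = 'arraybuffer';
      xhr.setRequestHeader('Range', 'bytes=' + start + '-' + end);
      xhr.onload = function () {
        if (xhr.status === 200 || xhr.status === 206) {
          callback(xhr.response); // ArrayBuffer holding just this chunk
        }
      };
      xhr.send();
    }

    loadChunk('graph.bin', 0, 65535, function (buffer) {
      console.log('got', buffer.byteLength, 'bytes');
    });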
Direct access to files is currently quite limited: FileList and createObjectURL are primarily meant for direct file input from the user (through drag and drop or a file input field), the FileSystem API was recently killed, and DeviceStorage is non-standard and privileged (Firefox OS-specific). You can also store files in IndexedDB, as described for the FileHandle API. However, once you manage to get access to a raw File object, you can use the Blob.slice method to load chunks of the file; there is a great example of reading file chunks via an upload form.
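A minimal sketch of such chunked reads with Blob.slice and FileReader (the input selector and offsets are arbitrary):

    // Sketch: emulating random access on a File picked by the user.
    function readChunk(file, offset, length, callback) {
      var reader = new FileReader();
      reader.onload = function () {
        callback(reader.result); // ArrayBuffer for [offset, offset + length)
      };
      reader.readAsArrayBuffer(file.slice(offset, offset + length));
    }

    document.querySelector('input[type=file]').addEventListener('change', function () {
      readChunk(this.files[0], 1024, 4096, function (buffer) {
        var view = new DataView(buffer);
        console.log('first uint32 of the chunk:', view.getUint32(0));
      });
    });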
You may also want to look at the jDataView library & friends, which ease handling of binary data through the more efficient ArrayBuffer.
Edit: As for the synchronous API, localStorage (aka DOM Storage) could be considered too. It is also a key-value storage, but much, much simpler and more limited than IndexedDB:
Size of the storage is limited, usually to 5 MB
Only one localStorage per domain/application (you can have multiple named object stores in IndexedDB).
Only strings can be stored.
In general, localStorage is a useful cookies replacement, but it is not really suited for storing large offline data.
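For completeness, the strings-only limitation is usually worked around by serialising to JSON:

    // localStorage only stores strings, so objects must be serialised manually.
    localStorage.setItem('settings', JSON.stringify({ zoom: 12, offline: true }));
    var settings = JSON.parse(localStorage.getItem('settings'));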
So to sum it up:
IndexedDB is the easiest and most widely available option, though it may be slow, inefficient, or hit memory limits with very large data; also, only the asynchronous API is currently usable.
Raw file access is hard to obtain without user interaction and the APIs are unstable and non-standard.
In the end, you can combine both approaches; two options come to mind:
Use XHR2 to parse the large file in chunks and store the parsed nodes into IndexedDB
Store the large file into IndexedDB (via XHR), use FileHandle.getFile to load the File object and Blob.slice to read its content.
In all cases, you can (and should) use Web Workers to handle data manipulation and calculations in the background.
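A minimal sketch of that split (the script name and message shape are made up):

    // main.js -- keep the UI thread free, let the worker do the parsing.
    var worker = new Worker('parser.js'); // hypothetical worker script
    worker.onmessage = function (e) {
      console.log('parsed', e.data.nodeCount, 'nodes');
    };
    worker.postMessage({ url: 'graph.bin' });

    // parser.js -- runs in the background.
    self.onmessage = function (e) {
      // fetch and parse e.data.url here without blocking the UI...
      self.postMessage({ nodeCount: 123 });
    };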
Anyway, GraphHopper looks great, we really lack such non-trivial offline applications for Firefox OS, so good luck!

Related

How much data should we cache in memory in single page applications?

I was curious to know if there is any limit to data caching in single-page applications using a shared service or ngrx, and whether caching too much data on the front end impacts the overall performance of the web application (DOM).
Let's say I have a very big, complex nested object which I am caching in memory.
Now assume that I want to use different subsets of that object in different modules/components of our application, and for that I may need to do a lot of mapping operations on the UI (looping, matching ids, etc.).
I was thinking the other way around: instead of doing so many operations on the UI to extract the relevant data, why don't I use a simple API with an id parameter to fetch the relevant information, if it does not take much time to get the data from the backend?
url = some/url/{id}
So is it worth caching complex nested objects if we can't use their subsets simply via property access (obj[prop]) and need to do a lot of calculations on the UI (looping etc.), which is actually more time-consuming than getting the data from the REST API?
Any help/explanation will be appreciated!
Thanks
Caching too much data in memory is not a good idea. It will affect your application's performance, and causes noticeable degradation on machines with less memory.
Theoretically, an in-memory cache is meant for keeping small amounts of data. The maximum supported size is 2 GB, and I think Chrome also supports up to that limit.
For keeping large data on the client side, never use an in-memory cache; use a client-side database/datastore instead, which uses disk space rather than memory.
There are a number of web technologies that store data on the client side:
Indexed Database
Web SQL
LocalStorage
Cookies
Which one to use can be decided depending on the client application framework.
By default, the browser uses about 10% of disk space for these data stores. There are also options to increase that size.
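In browsers that implement the newer Storage API, you can even inspect the quota yourself (a minimal sketch; this API is not available everywhere):

    // Sketch: checking how much of the storage quota is already used.
    if (navigator.storage && navigator.storage.estimate) {
      navigator.storage.estimate().then(function (estimate) {
        console.log('used', estimate.usage, 'of', estimate.quota, 'bytes');
      });
    }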

Looking for examples of accesing IndexedDB from several scripts

I'm trying to implement a job processor using workers in the background.
I will store some job-related information in IndexedDB.
I tried to find information related to accessing the same IndexedDB database from multiple scripts (multiple workers in my case), including how the version-change handling works in that situation, but couldn't find anything useful.
I need some information on that topic...
You can look at my IDB library as an example of IDB in Web Workers.
Things to note:
IDB in a Web Worker does not work in Firefox. Although the spec says it should allow async access (not to mention the non-existent sync access), and the Mozilla IDB dev is championing a ticket for this, Mozilla's bug tracker suggests to me that lots of issues need to be worked out and that this won't be available in the near future (as of 3/2014).
Like IDB when it stores data, Web Workers use the structured clone algorithm to pass data between the worker thread and the parent. This means that all your objects need to be cloneable, so you need to transform IDBObjectStore, DOMStringList, etc. into plain vanilla JS objects.
Otherwise, IDB in Web Workers is great. Personally I think this is the best way to fetch data without any chance of locking the UI.
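To illustrate, a worker-side sketch (the database name and message shape are hypothetical; per the Firefox caveat above, this only runs in browsers that expose indexedDB to workers):

    // worker.js -- IndexedDB used entirely off the UI thread.
    self.onmessage = function (e) {
      var open = self.indexedDB.open('jobs-db', 1);
      open.onupgradeneeded = function (ev) {
        ev.target.result.createObjectStore('jobs', { keyPath: 'id' });
      };
      open.onsuccess = function (ev) {
        var db = ev.target.result;
        var tx = db.transaction('jobs', 'readwrite');
        tx.objectStore('jobs').put({ id: e.data.id, status: 'done' });
        tx.oncomplete = function () {
          // Post back plain objects only: they must survive structured cloning.
          self.postMessage({ id: e.data.id, stored: true });
        };
      };
    };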

Caching text/image assets in performance-constrained environments

I'm working on an extremely performance-constrained device. Because of the overhead of AJAX requests, I intend to aggressively cache text and image assets in the browser, but I need to configure the cache size per device to as low as 1MB of text and 9MB of images -- quite a challenge for a multi-screen, graphical application.
Because the device easily hits the memory limit, I must be very cautious about how I manage my application's size: code file size, # of concurrent HTTP requests, # of JS processor cycles upon event dispatch, limiting CSS reflows, etc. My question today is how to develop a size-restrained cache for text assets and images.
For text, I've rolled my own cache using JSON.encode().length for objects and 'string'.length to approximate size. The application manually gets/sets cache entries. Upon hitting a configurable upper limit, the class garbage collects itself from gcLimit to gcTarget sizes, giving weight to the last-accessed properties (i.e., if something has been accessed recently, skip collecting that object the first time around).
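For readers who want to picture it, a stripped-down sketch of that kind of cache (the gcLimit/gcTarget names follow the description above; the implementation details are mine, not the asker's):

    // Sketch: size-bounded cache evicting least-recently-accessed entries
    // from gcLimit down to gcTarget.
    function TextCache(gcLimit, gcTarget) {
      this.gcLimit = gcLimit;
      this.gcTarget = gcTarget;
      this.size = 0;
      this.entries = {}; // key -> { value, size, lastAccess }
    }

    TextCache.prototype.set = function (key, value) {
      if (this.entries[key]) this.size -= this.entries[key].size;
      var size = JSON.stringify(value).length; // rough size estimate
      this.entries[key] = { value: value, size: size, lastAccess: Date.now() };
      this.size += size;
      if (this.size > this.gcLimit) this.gc();
    };

    TextCache.prototype.get = function (key) {
      var entry = this.entries[key];
      if (!entry) return undefined;
      entry.lastAccess = Date.now(); // recently used entries survive gc longer
      return entry.value;
    };

    TextCache.prototype.gc = function () {
      var self = this;
      var keys = Object.keys(this.entries).sort(function (a, b) {
        return self.entries[a].lastAccess - self.entries[b].lastAccess;
      });
      for (var i = 0; i < keys.length && this.size > this.gcTarget; i++) {
        this.size -= this.entries[keys[i]].size;
        delete this.entries[keys[i]];
      }
    };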
For images, I intend to preload interface elements and let the browser deal with garbage collection itself by removing DOM elements and never persistently storing Image() objects. For preloading, I will probably roll my own again -- I have examples to imitate like FiNGAHOLiC's ImgPreloader and this. I need to keep in mind features like "download window size" and "max cache requests" to ensure I don't inadvertently overload the device.
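As a sketch of the "download window" idea (the concurrency limit and URLs are arbitrary):

    // Sketch: preload images with at most maxConcurrent requests in flight.
    function preload(urls, maxConcurrent, onDone) {
      var queue = urls.slice();
      var inFlight = 0;
      var loaded = 0;

      function next() {
        while (inFlight < maxConcurrent && queue.length) {
          var img = new Image();
          inFlight++;
          img.onload = img.onerror = function () {
            inFlight--;
            loaded++;
            if (queue.length) next();
            else if (inFlight === 0) onDone(loaded);
          };
          img.src = queue.shift();
        }
      }
      next();
    }

    preload(['a.png', 'b.png', 'c.png'], 2, function (n) {
      console.log(n + ' images fetched');
    });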
This is a huge challenge working in such a constrained environment, and common frameworks like Backbone don't support "max Collection size". Elsewhere on SO, users quote limits of 5MB for HTML5 localStorage, but my goal is not session persistence, so I don't see the benefit.
I can't help feeling there might be better solutions. Ideas?
Edit: @Xotic750: Thanks for the nod to IndexedDB. Sadly, this app is a standard web page built on Opera/Presto. Even better, the platform offers no persistence. Rock and a hard place :-/.
localStorage and sessionStorage (DOM Storage) limits do not apply (or can be overridden) if the application is a browser extension (you don't mention what your application is).
localStorage is persistent
sessionStorage is sessional
Idea
Take a look at IndexedDB; it is far more flexible, though not as widely supported yet.
Also, some references to Chrome storage
Managing HTML5 Offline Storage
chrome.storage
With modern JavaScript engines, CPU/GPU performance is not an issue for most apps (except games, heavy animation or Flash) even on low-powered devices, so I suspect your primary issues are memory and IO. Optimising for one typically harms the other, but I suspect the issues below will be your primary concern.
I'm not sure you have any control over the cache usage of the browser. You can limit the memory taken up by the JavaScript app using methods like those you've suggested, but the browser will still do its own thing, and that is probably the primary issue in terms of memory. By creating your own caches you will probably be duplicating data that is already cached by the browser, and so exacerbate the memory problem. The browser (unless you're using something obscure) will normally do a better job of caching than is possible in JavaScript. In any case, I would strongly recommend letting the browser take care of garbage collection, as there is no way in JavaScript to actually force browsers to free up memory (they do garbage collection when they want, not when you tell them to). Do you have control over which browser is used on the device? If you do, then changing that may be the best way to reduce memory usage (also, can you limit the size of the browser cache?).
For AJAX requests, ensure you fully understand the difference between GET and POST, as this has big implications for caching in the browser and on proxies routing messages around the web (and therefore also affects the logic of your app). See if you can minimise the number of requests by grouping them together (JSON helps here). It is normally latency rather than bandwidth that is the issue for AJAX requests (but don't go too far, as most browsers can do several requests concurrently). Ensure you construct your AJAX manager to allow prioritisation of requests (i.e. stuff that affects what the user sees is prioritised over preloading, which is prioritised over analytics; half the web makes a Google Analytics call as the first thing after page load, even before ads and other content are loaded).
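A bare-bones sketch of such prioritisation (the URLs and priorities are invented; a real manager would also handle errors and concurrency):

    // Sketch: drain queued requests in priority order (lower = more urgent).
    var ajaxQueue = [];

    function enqueue(url, priority, callback) {
      ajaxQueue.push({ url: url, priority: priority, callback: callback });
      ajaxQueue.sort(function (a, b) { return a.priority - b.priority; });
    }

    function drain() {
      var job = ajaxQueue.shift();
      if (!job) return;
      var xhr = new XMLHttpRequest();
      xhr.open('GET', job.url, true);
      xhr.onload = function () {
        job.callback(xhr.responseText);
        drain(); // one request at a time; widen if the device allows
      };
      xhr.send();
    }

    enqueue('/screen.json', 0, function (body) { /* render visible UI */ });
    enqueue('/next-screen.json', 1, function (body) { /* warm the cache */ });
    enqueue('/analytics', 2, function () { /* last, never first */ });
    drain();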
Beyond that, I would suggest that images are likely to be the primary contributor to memory issues (I doubt code size even registers, but you should ensure code is minimised with e.g. Google Closure). Reduce image resolutions to the bare minimum and experiment with file formats (e.g. GIF or PNG might be significantly smaller than JPEG for some images (cartoons, logos, icons) but much larger for others (photos, gradients)).
10MB of cache in your app may sound small but it is actually enormous compared with most apps out there. The majority leave caching to the browser (which in any case will probably still cache the data whether you want it to or not).
You mention Image objects, which suggests you are using the canvas. There is a noticeable speed improvement if you create a new canvas to store the image (after which you can discard the Image object). You can use this canvas as the source of any image data you later need to copy to a canvas, and as no translation between data types is required, this is much faster. Given that canvas operations often happen many times per frame, this can be a significant boost.
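A minimal sketch of that trick (the image URL and target canvas are placeholders):

    // Sketch: cache decoded pixels in an offscreen canvas, drop the Image.
    function toCanvas(img) {
      var canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      canvas.getContext('2d').drawImage(img, 0, 0);
      return canvas;
    }

    var screenCtx = document.querySelector('canvas').getContext('2d');
    var img = new Image();
    img.onload = function () {
      var cached = toCanvas(img); // img can be discarded after this
      // Per frame: canvas-to-canvas copy, no re-decoding needed.
      screenCtx.drawImage(cached, 0, 0);
    };
    img.src = 'sprite.png';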
Final note - don't use frameworks / libraries that were developed with a desktop environment in mind. To optimise performance (whether speed or memory) you need to understand every line of code. Look at the source code of libraries (many have some very clever optimised code) but assume that, in general, you are a special case for which they are not optimised.

Is there any way to automatically synchronize html5 localstorage between computers

I have a simple offline html5/javascript single-html-file web application that I store in my dropbox. It's a sort of time tracking tool I wrote, and it saves the application data to local storage. Since its for my own use, I like the convenience of an offline app.
But I have several computers, and I've been trying to come up with any sort of hacky way to synchronize this app's data (which is currently using local storage) between my various machines.
It seems that Chrome allows synchronization of data, but only for Chrome extensions. I also thought I could perhaps have the web page automatically save/load its data from a file in a Dropbox folder, but there doesn't appear to be a way to automatically sync with a specific file without user prompting.
I suppose the "obvious" solution is to put the page on a server and store the data in a database. But suppose I don't want a solution which requires me to maintain apps on a server - is there another way, however hacky, to cobble together synchronization?
I even looked for a while to see if there was a vendor offering a web database service - where I could, say, post/get a blob of json on demand, and then somehow have my offline app sync with this service, but the same-origin policy seems to invalidate that plan (and besides I couldn't find such a service).
Is there a tricky/sneaky solution to this problem using chrome, or google drive, or dropbox, or some other tool I'm not aware of? Or am I stuck setting up my own server?
I have been working on a project that basically gives you versioned localStorage with support for conflict resolution if the same resource ends up being edited by two different clients. At this point there are no drivers for server or client (they are async in-memory at the moment, for testing purposes), but there is a lot of code and abstraction to make writing your own drivers really easy... I was even thinking of doing a Dropbox/Google Docs driver myself, except I want DynamoDB/MongoDB and Lawnchair done first.
The code does not depend on jQuery or any other library, and there's a pretty full-featured (though ugly) demo for it as well.
Anyway the URL is https://github.com/forbesmyester/SyncIt
Apparently I have exactly the same issue and investigated it thoroughly. The best choice would be remoteStorage, if you can manage to make it work. It allows you to use a 3rd-party server for data storage or to run your own instance.

Is there anything bigger than HTML5 localStorage?

I'd like to store local data on the client side to speed up page loads of my web-app. I tried with HTML5's localStorage but unfortunately it's too small for my needs. Is there anything bigger?
You have several options, but browser support differs, so you will need to abstract the differences away from the browser with an extra layer on top of the various mechanisms:
Local Storage is supported by pretty much everything, and normally stores 5-10MB.
IndexedDB is supported by most, but not all desktop browsers and can store lots more data. How much depends on the browser, but expect something like 50MB or unlimited.
WebSQL is the only way to go if you aim for iOS and Android browsers, as they don't support IndexedDB. You can store up to 50MB there.
Elegant abstraction layers that do all of this for you are many; check out the answers in this thread.
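The detection logic underneath such layers boils down to something like this sketch:

    // Sketch: pick the best available storage backend.
    function pickStore() {
      if (window.indexedDB) return 'indexeddb';       // largest quota
      if (window.openDatabase) return 'websql';       // older iOS/Android
      if (window.localStorage) return 'localstorage'; // ~5 MB fallback
      return 'memory';
    }
    console.log('using ' + pickStore());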
If the data is static in nature you might be able to cache some in a .js file or use jsonp instead of local storage.
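For the static-data case, that can look roughly like this (the endpoint and callback names are hypothetical):

    // Sketch: JSONP-style loading of static data via a cacheable script tag.
    window.onStaticData = function (data) {
      console.log('got', Object.keys(data).length, 'records');
    };
    var s = document.createElement('script');
    s.src = 'https://example.com/data.js?callback=onStaticData';
    document.head.appendChild(s);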
Otherwise your only option right now is local storage, with a storage limit of 5-10 megabytes depending on the browser. You might be able to get more out of it by compressing your data with something like this: https://github.com/olle/lz77-kit/blob/master/src/main/js/lz77.js
