I'm trying to implement a job processor using background workers.
I will store some job-related information in IndexedDB.
I tried to find information about accessing the same IndexedDB database from multiple scripts (multiple workers, in my case), including how the version-change mechanism behaves in that scenario, but I couldn't find anything useful.
I need some information on that topic...
You can look at my IDB library as an example of IDB in Web Workers.
Things to note:
IDB in a Web Worker does not work in Firefox. Although the spec says it should allow async access (not to mention the non-existent sync access), and the Mozilla IDB dev is championing their ticket for this, Mozilla's bug tracker suggests to me that lots of issues still need to be worked out and that this won't be available in the near future (as of 3/2014).
Like IDB when it stores data, Web Workers use the structured clone algorithm to pass data between the worker thread and its parent. This means that all your objects need to be cloneable, so you need to transform IDBObjectStore, DOMStringList, etc. into plain vanilla JS objects.
Otherwise, IDB in Web Workers is great. Personally I think this is the best way to fetch data without any chance of locking the UI.
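A minimal sketch of the pattern described above: a worker opens a database and converts array-like host objects such as DOMStringList into plain, cloneable arrays before posting them back to the parent. The database and store names are made up for illustration.

```javascript
// Convert an array-like host object (e.g. DOMStringList) into a plain
// array that the structured clone algorithm can handle.
function toPlainArray(arrayLike) {
  return Array.prototype.slice.call(arrayLike);
}

// Worker-side sketch (not invoked here): open IndexedDB and report the
// store names. `indexedDB` and `postMessage` exist in a worker scope.
function openAndReport() {
  var req = indexedDB.open('jobs-db', 1); // hypothetical database name
  req.onupgradeneeded = function (e) {
    e.target.result.createObjectStore('jobs', { keyPath: 'id' });
  };
  req.onsuccess = function (e) {
    var db = e.target.result;
    // db.objectStoreNames is a DOMStringList; clone it before posting.
    postMessage({ stores: toPlainArray(db.objectStoreNames) });
  };
}
```

The same `toPlainArray` trick applies to anything else the worker wants to send back, such as the result of `IDBObjectStore.getAll` wrappers that return host objects.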
Related
I'm working on an application where I have multiple webworkers running together. These webworkers are developed by third parties, and are not trusted. They provide postmessage APIs to each other.
I would like to enable the webworkers to have safe access to local storage. IndexedDB is the standard choice, however I need to ensure that a malicious webworker cannot interfere with the data of another webworker.
My original idea was that I could 'domain' each webworker somehow. Each one gets access to its own piece of IndexedDB, and cannot see the storage put in other pieces by other webworkers. At the moment, I do not believe this is possible since I need the workers to exist together in one iframe.
My next idea was to have a single, trusted webworker that has IndexedDB access, and set up sandbox rules for all of the other webworkers such that they can't use IndexedDB at all, but instead must communicate with the API of the trusted webworker to store and retrieve local data. My current understanding is that I can get this to work if I use two iframes, where the first iframe has access to IndexedDB and runs the trusted webworker, and the second iframe is in a different domain where non-malicious webworkers know not to use the storage.
I am not a huge fan of the two-iframe solution - it's complex, has performance overhead, and requires webworker devs to know they can't safely use localStorage even though they actually have access - and I'm looking for a better way to sandbox specific webworkers away from IndexedDB.
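For what it's worth, the trusted-worker idea can be sketched as a message router that namespaces each untrusted worker's keys, so one worker can never read another's data. The message shape and worker ids below are invented for illustration; `store` stands in for the real IndexedDB-backed store.

```javascript
// Build the real storage key from a worker's identity and requested key.
// A NUL separator avoids collisions with keys containing the separator.
function namespacedKey(workerId, key) {
  return workerId + '\u0000' + key;
}

// Trusted-worker sketch: route { op, key, value } requests from untrusted
// workers into a single store, prefixing every key with the sender's id.
function handleRequest(store, workerId, msg) {
  var k = namespacedKey(workerId, msg.key);
  if (msg.op === 'set') {
    store.set(k, msg.value);
    return { ok: true };
  }
  if (msg.op === 'get') {
    return { ok: true, value: store.get(k) };
  }
  return { ok: false, error: 'unknown op' };
}
```

The important property is that the trusted worker derives `workerId` from the connection itself (e.g. which MessagePort the request arrived on), never from the message body, so a malicious worker cannot forge another worker's identity.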
I am using Cloud Firestore in a React Native app and I am trying to reduce read/write operations to a minimum. I just thought of using a local DB so that all data fetched from the cloud is saved in local storage, and I would add a snapshot listener to listen for changes whenever the user starts the app.
Is this a good approach for what I am aiming for? If not, why not? And if yes, do you have any suggestions about its implementation?
I feel compelled to point out that the other (currently accepted) answer here is flat out incorrect, or at least misleading for a few reasons.
First, Firestore doesn't use HTTP, and the results of queries are never going to be maintained by your typical browser cache. The claims the answer makes about HTTP caching semantics simply do not apply.
Second, the Firestore SDK uses an internal cache, which is enabled by default on Android and iOS, because this cache almost always benefits the end user. Web applications would do well to enable it as well; it requires one line of code. This cache is consulted automatically when the client is offline, and it can be queried directly if cached results are desired.
Third, adding an additional layer of cache or persistence is actually very necessary for applications that must be fully usable offline. Firestore was not designed to be used fully offline, so having a local-first option is necessary for some applications. The additional cache can be synchronized with Firebase as a sort of cloud backup.
All told, the question is technically too broad for Stack Overflow, and it would take a conversation to understand whether it's worthwhile to enable Firestore's cache or to add an additional cache on top of it. But it is simply not true that client-side caching is a bad idea.
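As a rough, framework-agnostic illustration of the cache-plus-listener approach from the question: a plain object acts as the local cache and a function applies Firestore-style document changes to it. The change shape mirrors what `docChanges()` produces, but nothing here depends on the SDK.

```javascript
// Apply a batch of snapshot changes to a local cache keyed by document id.
// Each change is { type: 'added' | 'modified' | 'removed', id, data }.
function applyChanges(cache, changes) {
  for (var i = 0; i < changes.length; i++) {
    var c = changes[i];
    if (c.type === 'removed') {
      delete cache[c.id];
    } else {
      cache[c.id] = c.data; // 'added' and 'modified' both overwrite
    }
  }
  return cache;
}
```

On app start you would hydrate the cache from local storage, attach an `onSnapshot` listener, funnel its `docChanges()` through `applyChanges`, and persist the result. The "one line of code" mentioned above is, in the older namespaced web SDK, `firebase.firestore().enablePersistence()`.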
No, it's not a good approach.
Caching data is generally a good idea, but implementing it at the DBMS tier means writing a lot of code for a caching mechanism you have yet to define. The reason it's a bad idea is that JavaScript running on a client already has access to a data tier with very well-defined caching semantics implemented in the runtime environment: HTTP.
I am migrating my script from Google Chrome Extension to node.js
And I simply need to store a couple of variables, nothing fancy and performance isn't an issue either, since they would only be accessed when the script is restarted.
In Google Chrome Extension I would use the client side HTML5 storage (localStorage)
However, as a server-side platform, node.js doesn't have this feature, which is not surprising.
I could of course install some database, and being particularly familiar with MySQL this would not be an issue, but if there is a simpler way of storing my configs, I would much like to try it.
If you have experience with localStorage, you can use node-localstorage.
I'd recommend looking at sw-precache, which builds on service workers. They run in the browser and can intercept network requests via the fetch event. Here's a great node module that will accomplish your needs.
https://github.com/GoogleChrome/sw-precache
Here is a codelab that explains service workers in more detail:
https://codelabs.developers.google.com/codelabs/sw-precache/index.html
How would you emulate a memory mapped file in FirefoxOS, Tizen or any other mobile pure-JS solution?
The use case is for a mobile browser and you need lots of data which does not fit in the RAM or you don't want to waste RAM for it yet and prefer to lazy load it.
The only thing I found is IndexedDB. What else can I do? Any better tricks or APIs?
Hmm, it looks like Web SQL Database could also be a solution on Android, Tizen or iOS. But Firefox does not support it (?)
Update: I'm asking because of some experiments
First things first: Web SQL will never be standardised, as explained in the specification, so it should be considered only for WebKit/Blink-based browsers.
There is an awesome overview of offline storage options in this question; even though that question focuses on map tiles, I think it is still relevant to your use case.
I believe you are on the right track with IndexedDB for the graph data. On a high level it is a key-value asynchronous object store (see the Basic Concepts document). For your use case, you could index graph nodes in an object store. There is, for example, LevelGraph library which stores graph data in IndexedDB, though it is built for Semantic Web triples. HeliosJS is also worth mentioning, though it is an in-memory graph database.
Edit: Current API for IndexedDB is asynchronous. There is synchronous API drafted in the spec, which could be used only in web workers. Unfortunately no engine currently implements this feature. There is a pending patch for Gecko, but I did not find any plans for Blink or WebKit, so it is not a meaningful option now.
It is possible to access raw files through Web APIs. You could use XHR2 to load a (local) file as a binary Blob. Unfortunately, XHR2 is mostly designed for streaming files, not for random access; you could split the data into multiple files and request them on demand, but that may be slow.
The direct access to files is currently quite limited, FileList and createObjectURL are primarily used for direct file user input (through Drag and Drop or file input field), FileSystem API was recently killed, and the DeviceStorage is non-standard and privileged (Firefox OS-specific). You can also store files in IndexedDB, which is described for FileHandle API. However, once you manage to get access to the raw File object, you can use the Blob.slice method to load chunks of the file – there is a great example of reading file chunks via upload form.
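To illustrate the Blob.slice approach mentioned above, here is a small sketch that carves a large Blob into fixed-size chunks. Slicing is cheap: each slice is only a view, and no data is read until you actually consume a chunk (e.g. with a FileReader):

```javascript
// Split `blob` into views of at most `chunkSize` bytes each.
// slice(start, end) clamps `end` to the blob's size automatically.
function sliceChunks(blob, chunkSize) {
  var chunks = [];
  for (var offset = 0; offset < blob.size; offset += chunkSize) {
    chunks.push(blob.slice(offset, offset + chunkSize));
  }
  return chunks;
}
```

In the browser you would then hand each chunk to a FileReader (or, in a worker, `FileReaderSync`) only when that part of the data is actually needed, which gives you the lazy-loading behaviour you are after.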
You may also want to look at jDataView library & friends, which eases handling of binary data through the more efficient ArrayBuffer.
Edit: As for the synchronous API, localStorage (aka DOM Storage) could be considered too. It is also a key-value storage, but much much simpler and more limited than IndexedDB:
Size of the storage is limited, usually to 5 MB
Only one localStorage per domain/application (you can have multiple named object stores in IndexedDB).
Only strings can be stored.
In general, localStorage is a useful replacement for cookies, but it is not really useful for storing large offline data.
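Since localStorage only stores strings, structured values have to be serialised. Here is a tiny sketch of the usual JSON wrapper; the storage object is parameterised so the same helpers work against any localStorage-like interface:

```javascript
// Store and retrieve JSON-serialisable values on a localStorage-like
// object (anything with getItem/setItem that takes and returns strings).
function setJSON(storage, key, value) {
  storage.setItem(key, JSON.stringify(value));
}

function getJSON(storage, key) {
  var raw = storage.getItem(key);
  // getItem returns null for missing keys
  return raw == null ? undefined : JSON.parse(raw);
}
```

Note that serialising large graphs this way quickly runs into the ~5 MB quota mentioned above, which is another reason IndexedDB is the better fit for big data sets.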
So to sum it up:
IndexedDB is the easiest and most widely available option, though it may be slow, inefficient, or hit memory limits with very large data; also, only the asynchronous API is currently usable.
Raw file access is hard to obtain without user interaction and the APIs are unstable and non-standard.
In the end, you can combine both approaches, two options come in mind:
Use XHR2 to parse the large file in chunks and store the parsed nodes into IndexedDB
Store the large file into IndexedDB (via XHR), use FileHandle.getFile to load the File object and Blob.slice to read its content.
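The first option above (fetch in chunks, store parsed nodes) boils down to computing byte ranges for HTTP Range requests and writing each parsed batch into an object store. A sketch, where the range computation is the reusable core and the XHR part is browser-only and not invoked here (the URL is illustrative):

```javascript
// Compute [start, end] byte ranges (inclusive, as used by the HTTP
// Range header) for downloading `totalBytes` in `chunkBytes` pieces.
function byteRanges(totalBytes, chunkBytes) {
  var ranges = [];
  for (var start = 0; start < totalBytes; start += chunkBytes) {
    ranges.push([start, Math.min(start + chunkBytes, totalBytes) - 1]);
  }
  return ranges;
}

// Browser-only sketch: request one chunk with XHR2 as an ArrayBuffer.
function fetchChunk(url, range, onData) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'arraybuffer';
  xhr.setRequestHeader('Range', 'bytes=' + range[0] + '-' + range[1]);
  xhr.onload = function () { onData(xhr.response); };
  xhr.send();
}
```

Each downloaded chunk would then be parsed and the resulting nodes `put` into an IndexedDB object store, ideally from inside a Web Worker as suggested below.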
In all cases, you can (should) use Web Workers to handle data manipulation and calculations in the background.
Anyway, GraphHopper looks great, we really lack such non-trivial offline applications for Firefox OS, so good luck!
I have a simple offline html5/javascript single-html-file web application that I store in my dropbox. It's a sort of time tracking tool I wrote, and it saves the application data to local storage. Since its for my own use, I like the convenience of an offline app.
But I have several computers, and I've been trying to come up with any sort of hacky way to synchronize this app's data (which is currently using local storage) between my various machines.
It seems that chrome allows synchronization of data, but only for chrome extensions. I also thought I could perhaps have the web page automatically save/load its data from a file in a dropbox folder, but there doesn't appear to be a way to automatically sync with a specific file without user prompting.
I suppose the "obvious" solution is to put the page on a server and store the data in a database. But suppose I don't want a solution which requires me to maintain apps on a server - is there another way, however hacky, to cobble together synchronization?
I even looked for a while to see if there was a vendor offering a web database service - where I could, say, post/get a blob of json on demand, and then somehow have my offline app sync with this service, but the same-origin policy seems to invalidate that plan (and besides I couldn't find such a service).
Is there a tricky/sneaky solution to this problem using chrome, or google drive, or dropbox, or some other tool I'm not aware of? Or am I stuck setting up my own server?
I have been working on a project that basically gives you versioned localStorage, with support for conflict resolution if the same resource ends up being edited by two different clients. At this point there are no drivers for server or client (they are async in-memory at the moment, for testing purposes), but there is a lot of code and abstraction to make writing your own drivers really easy... I was even thinking of doing a Dropbox/Google Docs driver myself, except I want DynamoDB/MongoDB and Lawnchair done first.
The code does not depend on jQuery or any other libraries, and there's a pretty full-featured (though ugly) demo for it as well.
Anyway the URL is https://github.com/forbesmyester/SyncIt
Apparently, I had exactly the same issue and investigated it thoroughly. The best choice would be remoteStorage, if you can manage to make it work. It allows using a 3rd-party server for data storage, or running your own instance.