Do javascript variables have a storage limit? - javascript

Do javascript variables have a storage capacity limit?
I'm designing a YUI DataTable where I fetch the data from the database, store it in a JS object, and extract it wherever required to update the DataTable. Right now in dev I have very few records and it stores them correctly. In production I may have thousands of records; will this JS object be able to store all those thousands of records?
If it's not capable, I'll create a hidden textarea in the JSP and store the data there.

Yes, objects and arrays have storage limits. They are sufficiently large to be, for most purposes, theoretical. You will be more limited by the VM than the language.
In your specific case (sending thousands of items to a client), you will run into the same problem whether it is JSON, JavaScript, or plain text on the JSP page: client memory. The client is far more likely to run out of usable system memory than you are to run into a language restriction. For thousands of small objects, this shouldn't be an issue.
Arrays have a limit of about 4.29 billion elements (2^32 − 1), shown in the spec at 15.4.2.2, for example. This is caused by the length property being a 32-bit unsigned counter. Assuming each element is a single 32-bit integer, that allows you to store roughly 16GB of numeric data in a single array.
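For example, here is a rough sketch of how that limit shows up in practice (the exact error message varies by engine; the variable names are just for illustration):
    // Rough sketch: the largest legal array length is 2^32 - 1.
    const max = 2 ** 32 - 1;                // 4,294,967,295
    const a = new Array(max);               // fine: a sparse array with that length
    console.log(a.length);                  // 4294967295
    try {
      new Array(2 ** 32);                   // one past the limit
    } catch (e) {
      console.log(e instanceof RangeError); // true: "Invalid array length"
    }
    // Index 2^32 - 1 itself is not a valid array index, so assigning to it
    // just creates an ordinary property and does not change length.
    a[max] = 'overflow';
    console.log(a.length);                  // still 4294967295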
The semantics for objects are more complex, but most functions that work with objects end up going through arrays, so you're limited to about 4.2 billion keys in most practical scenarios. Again, that's over 16GB of data, not counting the overhead of keeping the references.
The VM, and probably the garbage collector, will start to hang for long periods of time well before you get near the limits of the language. Some implementations will have smaller limits, especially older ones or interpreters. Since the JS spec does not specify minimum limits in most cases, those may be implementation-defined and could be much lower (this question on the maximum number of arguments discusses that).
A good optimizing VM tries to track the structure of the objects you use, but at that size the tracking overhead is high enough that the VM will probably fall back to using maps for your objects (it's theoretically possible to define a struct-like hidden class representing that much data, but not terribly practical). Maps have a small amount of overhead, and lookup times grow as size increases, so you will see performance implications, just not at any reasonable object size.
If you run into another limit, I suspect it will be 65k elements (2^16), as discussed in this answer. Finding an implementation that supports fewer than 65k elements seems unlikely, as most browsers were written after 32-bit architectures became the norm.

There isn't such a limit.
It looks like there is a limit at around 16GB, but you can read some tests below or in #ssube's answer.
But you'll probably encounter strange behaviour once your object/JSON is around 50 MB.
For JSON, here is an interesting article: http://josh.zeigler.us/technology/web-development/how-big-is-too-big-for-json/
For JS objects, there is more information here: javascript object max size limit (which says there isn't such a limit, but that strange behaviour appears at around 40 MB)

The limit depends on the available memory of the browser. So every PC, Mac, Mobile setup will give you a different limit.
I don't know how much memory one of your records needs, but I would guess that 1000 records should work on most machines.
But: you should avoid storing massive amounts of data in plain variables; depending on how much memory the records take, it slows down the behaviour of the whole website. Users with average computers may see ugly scrolling, delayed hover effects, and so on.
I would recommend using local storage. I'm sorry, I don't know the YUI library, but I am pretty sure you can point your datatable's data source at that storage.
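For example, a minimal sketch of caching fetched records in localStorage (the key name and fallback behaviour are just illustrative assumptions, and localStorage is typically capped at around 5MB per origin, so truly huge data sets still won't fit):
    // Hypothetical cache key; `records` is whatever rows you fetched for the table.
    const CACHE_KEY = 'datatableRecords';
    function saveRecords(records) {
      try {
        localStorage.setItem(CACHE_KEY, JSON.stringify(records));
      } catch (e) {
        // Quota exceeded: the data no longer fits, keep it in memory only.
        console.warn('localStorage is full; falling back to in-memory data', e);
      }
    }
    function loadRecords() {
      const raw = localStorage.getItem(CACHE_KEY);
      return raw ? JSON.parse(raw) : [];
    }
    // Usage: saveRecords(fetchedRows) after the AJAX call, loadRecords() when
    // (re)building the table. The YUI-specific wiring is omitted here.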

There is a limit on JavaScript objects: see max js object limit. What I would suggest using is session objects, because that's what it sounds like you're trying to do anyway.

Related

What is the best practice to quickly access big amount of data in the web browser?

I have this huge array of strings, saved in a JSON file on a remote server (the file size is 2MB and increasing).
In my front-end code, I need to constantly loop through this array.
I'm wondering what is the best practice to have the quickest access to it?
My approach right now is as follows:
Fetch the JSON file from the remote server once.
Assign this data to a JavaScript variable.
Save this data to an IndexedDb database, to minimize dealing with the network and save traffic.
When the page reloads, fetch the data from the IndexedDb database and re-assign it to the variable again, then go from there.
So, whenever I need to do the looping in the front end, I access the variable and loop over it from there. However, this doesn't sound like a good idea to me. I mean, a variable 2MB in size?! (It could be 10MB or more in the future.) I'm worried that it's using too much memory or badly affecting the performance of the web page.
My question is, is there a better way to do this? JavaScript/browser-wise.
And while we're at it, what do I need to know about the best practices for handling big amounts of data in the browser/JavaScript world?
Don't worry about memory.
Your 10MB of JSON strings will be transformed into binary representations. Your booleans (true/false) will take 1 bit or maybe 1 byte (depending on the implementation) instead of 4 bytes as strings. Your numbers will take about 8 bytes instead of one byte per character. Depending on the implementation, keys will be hashed and/or reused. So the parsed memory footprint will be a lot smaller. Also, if the data is static, the JS engine can optimize for it. Maybe you can freeze the data to make sure the engine knows it is immutable.
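A small sketch of the freezing idea (Object.freeze is shallow, so nested data needs a recursive helper; the jsonText value below is just a stand-in for the fetched JSON):
    // Freeze parsed JSON so accidental writes fail and the engine knows the
    // data will not change. Object.freeze is shallow, hence the recursion.
    function deepFreeze(value) {
      if (value !== null && typeof value === 'object' && !Object.isFrozen(value)) {
        Object.freeze(value);
        for (const key of Object.keys(value)) {
          deepFreeze(value[key]);
        }
      }
      return value;
    }
    const jsonText = '{"items": ["a", "b", "c"]}'; // placeholder for the fetched JSON
    const data = deepFreeze(JSON.parse(jsonText));
    // data.items.push('d'); // now throws: the frozen array is not extensible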
I wouldn't go as far as storing it in a database. That adds a lot of overhead.
Use the Web Storage API / LocalStorage instead.
Depending on the context, the CacheStorage API might be interesting (although I have no experience with it).
As with everything you do performance-wise: don't optimize too early. Do a performance analysis to find the bottlenecks.
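For example, a quick way to check whether the looping is actually a bottleneck, using console.time (the sample data and the check inside the loop are placeholders):
    // Measure the loop you are worried about before optimizing it.
    const data = ['alpha', 'beta needle', 'gamma']; // placeholder for the parsed array
    console.time('loop-over-strings');
    let hits = 0;
    for (const s of data) {
      if (s.includes('needle')) hits++;             // placeholder for the real work
    }
    console.timeEnd('loop-over-strings');           // logs e.g. "loop-over-strings: 0.2ms"
    // performance.now() works too if you want the number itself:
    const t0 = performance.now();
    // ... same loop ...
    const elapsedMs = performance.now() - t0;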

How much data should we cache in memory in single page applications?

I was curious to know whether there is any limit for data caching in single-page applications using a shared service or ngrx.
Does caching too much data on the front end impact the overall performance of the web application (DOM)?
Let's say I have a very big, complex nested object which I am caching in memory.
Now assume that I want to use different subsets of that object in different modules/components of our application, and for that I may need to do a lot of mapping operations on the UI (using loops, matching IDs, etc.).
I was thinking the other way around: instead of doing so many operations on the UI to extract the relevant data, why don't I use a simple API with an id parameter to fetch the relevant information, if it doesn't take much time to get the data from the backend?
url = some/url/{id}
So is it worth caching big, complex nested objects if we can't use a subset simply via its properties (obj[prop]) and need to do a lot of calculations on the UI (looping, etc.), which is actually more time consuming than getting the data from the REST API?
Any help/explanation will be appreciated!
Thanks
Caching too much data in memory is not a good idea. It will affect your application's performance and causes degradation on systems with less memory.
Theoretically, an in-memory cache is meant for keeping small amounts of data. The maximum supported size is 2GB; I think Chrome also supports up to about that limit.
For keeping large data on the client side, never use an in-memory cache; instead you should use a client-side database/datastore, which uses disk space instead of memory.
There are a number of web technologies that store data on the client side, like:
Indexed Database
Web SQL
LocalStorage
Cookies
Which one to use can be decided depending upon the client application framework.
By default the browser uses about 10% of disk space for these data stores. We also have the option to increase that size.
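For example, a minimal IndexedDB sketch, if you go that route (the database and store names here are made up):
    // Open a database, write one record, read it back.
    const openReq = indexedDB.open('app-cache', 1);          // placeholder names
    openReq.onupgradeneeded = (event) => {
      event.target.result.createObjectStore('records', { keyPath: 'id' });
    };
    openReq.onsuccess = (event) => {
      const db = event.target.result;
      db.transaction('records', 'readwrite')
        .objectStore('records')
        .put({ id: 42, payload: { /* cached subset */ } }); // write
      const getReq = db.transaction('records')
        .objectStore('records')
        .get(42);                                           // read
      getReq.onsuccess = () => console.log(getReq.result);
    };
    openReq.onerror = (event) => console.error('IndexedDB failed to open', event);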

Is there a way/function to set a limit on how much processing power a script can use in javascript?

What I am trying to do is set a limit on how much processing power my script can use in javascript. For instance, after 2 MB it starts to lag similar to how NES games would lag when too much action is being displayed on the screen.
I have tried doing research, but nothing is giving me the answer I'm looking for. Everything is about improving performance as opposed to reducing the amount that can be used.
Since it seems you are trying to limit memory usage, you might be able to track whatever it is that is using memory and simply limit that. For example, if you're creating a lot of objects, you could reuse already-created objects once you've created a certain number, which would limit the space used.
As an aside, I would suggest you check out service workers; they might present an alternative way of solving your issue.
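A rough sketch of that object-reuse idea, as a pool with a hard cap (the names and the object shape are made up for illustration):
    // Once `limit` objects exist, callers must reuse released objects
    // instead of allocating new ones, which caps memory growth.
    class ObjectPool {
      constructor(factory, limit) {
        this.factory = factory; // creates a fresh object
        this.limit = limit;     // maximum number of live objects
        this.free = [];
        this.created = 0;
      }
      acquire() {
        if (this.free.length > 0) return this.free.pop();
        if (this.created < this.limit) {
          this.created++;
          return this.factory();
        }
        return null;            // cap reached: caller has to back off or wait
      }
      release(obj) {
        this.free.push(obj);
      }
    }
    // Usage: at most 1000 particle objects ever exist, regardless of demand.
    const pool = new ObjectPool(() => ({ x: 0, y: 0, active: false }), 1000);
    const p = pool.acquire();
    if (p) { /* use p */ pool.release(p); }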

Reserve space in IndexedDB

I'm writing a diagnostics application that writes data to IndexedDB every three seconds. The size of the data is fixed and data older than one week is automatically deleted, resulting in at most 201,600 entries. This makes calculating the IndexedDB space requirements rather easy and precise. I would like to reserve space in IndexedDB when the site is launched, as the end user is likely to leave the browser unattended for long periods of time.
The simple solution seems to be to store a very large object that will prompt the user to accept space requirements. This requires storing then deleting a very large object, which takes a lot of time and processing power. This would also require checking whether this test has already taken place to ensure that it is only run once.
Is there a better solution?
I don't think there is a good solution to this. Even your solution doesn't really work. Just because you were once able to store a certain amount of data doesn't mean you will always be able to store that amount of data. The quota isn't constant. For instance, in Chrome it's roughly 10% of free hard drive space, which clearly can change over time.
If possible, the best solution would be to make your app gracefully handle the case where the quota is exceeded. Because it will happen no matter how you design your app.
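There is no API to reserve space, but where the StorageManager API is supported you can at least inspect the current quota and request persistence, then handle the over-quota failure when it eventually happens (a sketch only; the browser may not grant anything):
    // Reports and requests storage; neither call actually reserves space.
    async function checkStorage() {
      if (navigator.storage && navigator.storage.estimate) {
        const { usage, quota } = await navigator.storage.estimate();
        console.log(`Using ${usage} of roughly ${quota} bytes`);
      }
      if (navigator.storage && navigator.storage.persist) {
        const persisted = await navigator.storage.persist(); // browser may refuse
        console.log('Persistent storage granted:', persisted);
      }
    }
    checkStorage();
    // Regardless, when a write finally exceeds the quota, the IndexedDB
    // transaction aborts with a QuotaExceededError; that handler is the place
    // to evict the oldest diagnostic entries and retry.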

nodeJS, mongoDB, express, eJS - opinions on memory caching

I'm building my first site using this framework. I'm remaking a website I had done in PHP+MySQL and wish to know something about performance... On my website, I have two kinds of content:
Blog posts (for 2 sections of the site): these will tend to add up to thousands of records one day and are updated more often.
Static (sort of) data: information I keep in the database, like each site section's data (title, meta tags, header image URL, fixed HTML content, JavaScript and CSS filenames to include in that section), which is rarely updated and very small in size.
While I was learning the basics of Node.js, I started thinking of a way to improve the performance of the website in a way I couldn't with PHP. So, what I'm doing is:
When I run the app, the static content is all loaded into memory. I have a "model" object for each kind of content that stores the data in an array and has a method to refresh that data, i.e., when the administrator updates something, I call refresh() to fetch the new data from the database into that array. This way, on every page load, instead of querying the database, the app queries the in-memory object directly.
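A minimal sketch of that pattern with the current MongoDB Node driver (the collection name and lookup are placeholders, and error handling is omitted):
    // In-memory "model" for rarely-changing content: load once, refresh on demand.
    class SectionModel {
      constructor(db) {
        this.db = db;    // an already-connected mongodb Db instance
        this.data = [];  // cached documents
      }
      async refresh() {
        this.data = await this.db.collection('sections').find({}).toArray();
      }
      get(name) {
        return this.data.find((s) => s.name === name); // served from memory, no query
      }
    }
    // At startup: const sections = new SectionModel(db); await sections.refresh();
    // In the admin "save" handler: await sections.refresh();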
What I would like to know is whether there should be any increase in performance from working with objects directly in memory, or whether constant queries to the database would work just as well or even better.
Any documentation supporting your answer will be much appreciated.
Thanks
In terms of the general database performance, MongoDB will keep your working set in memory - that's its basic method of operation.
So, as long as there is no memory contention to cause the data to get swapped out, and it is not too large to fit into your physical RAM, then the queries to the database should be extremely fast (in the sub millisecond range once you have your data set paged in initially).
Of course, if the database is on a different host then you have network latency to think about and such, but theoretically you can treat them as the same until you have a reason to question it.
I don't think there will be any performance difference. First, this static data is probably not very big (up to 100 records?), so querying the DB for it is not a big deal. Second (more important), most DB engines (including MongoDB) have caching systems built in (although I'm not sure how they work in detail). Third, holding query results in memory does not scale well (for big websites) unless you use a storage engine like Redis. And that's my opinion, although I'm not an expert.
