JavaScript closure and memory issues

// The following function adds a new row to the table and returns an
// interface whose functions use a closure to access and update that row.
var _newRow = (function (tblObject) {
    var _interface = {
        updateName: null,
        updateProgress: null,
        actionLinkButton: null, // <a> tag used for user actions on the UI, like delete, hide, show etc.
        ..
        ..
        ..
    };
    var tr = createTr();
    var tdName = createTd();
    _interface.updateName = function (newName) {
        tdName.innerHTML = newName;
    };
    ..
    ..
    ..
    ..
    ..
    return _interface;
})(tblObject);

// maintain an array of all rows, indexed by row number
rowArray[rowNo] = _newRow;
..
..
// use the row array to update an entry later
rowArray[rowNo].updateProgress('read');
Above is the pattern I have used to update dynamically added rows on the client side. While adding a row to the table I create the _interface, return it, and store it by row number.
However, this relies on closures, which means many objects are kept alive. I would like to know: is this the correct approach? Is there a better approach than this? What profiling tools can I use to find out how much memory this code is using? And how can I ensure that the closures are cleaned up properly when they are no longer needed?

JavaScript has a garbage collector which will collect stray objects and deallocate them for you automatically. There is no way to control when or how it runs.
What may prevent objects/closures from being garbage collected is globally accessible objects referencing functions that close over lexical scope. Make sure you detach all values you no longer use (for instance, remove them from the DOM). If there is truly no way left to reach those values, they will eventually be garbage collected.
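Applied to the row pattern from the question, that means dropping every reference to a row's interface when the row goes away. A minimal sketch, assuming the interface also exposes a remove() that detaches its <tr> from the table (that method is not in the original code):
function deleteRow(rowNo) {
    var row = rowArray[rowNo];
    if (row && row.remove) {
        row.remove(); // hypothetical: detaches the <tr> held inside the closure
    }
    delete rowArray[rowNo]; // no global reference left, so the closure becomes collectable
}
Once neither rowArray nor the DOM holds a reference to the closure's variables, the garbage collector is free to reclaim them.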
To identify memory leaks, it's important not to optimize prematurely: monitor the memory usage of your web browser first. If nothing noticeable shows up there, you probably don't need to worry about it. You may want to simulate a lot of operations in your app to see what its state looks like after it has been running for a long time.
But generally, don't worry about memory usage. In my experience it is very rarely an issue.

Related

accessing and removing objects by ID

I have certain requirements and want to do the following in the quickest way possible.
I have thousands of objects like the ones below:
{id:1,value:"value1"} . . {id:1000,value:"value1000"}
I want to access the objects above by id.
I want to clean out objects with an id lower than a certain value every few minutes (my high-frequency algorithm generates thousands of objects every second).
I can clean easily using this:
myArray = myArray.filter(function( obj ) {
    return obj.id > cleanSize;
});
I can find an object by id using
myArray.find(x => x.id === 45);
The problem is that find feels a little slow when there are larger sets of data, so I created an object of objects like below:
const id = 22;
myArray["x" + id] = {};
myArray["x" + id] = { id: id, value: "test" };
Now I can access an item easily by id with myArray["x22"], but the problem is that I cannot find a way to remove older items by id.
Can someone guide me to a better way to achieve the three points I mentioned above using arrays or objects?
The trouble with your question is, you're asking for a way to finish an algorithm that is supposed to solve a problem of yours, but I think there's something fundamentally wrong with the problem to begin with :)
If you store a sizeable amount of data records, each associated with an ID, and allow your code to access them freely, then you cannot have another part of your code dump some of them to the bin out of the blue (say, from within some timer callback) just because they are becoming "too old". You must be sure nobody is still working on them (and will ever need to) before deleting any of them.
If you don't explicitly synchronize the creation and deletion of your records, you might end up with a code that happens to work (because your objects happen to be processed quickly enough never to be deleted too early), but will be likely to break anytime (if your processing time increases and your data becomes "too old" before being fully processed).
This is especially true in the context of a browser. Your code is supposed to run on any computer connected to the Internet, which could have dozens of reasons to be running 10 or 100 times slower than the machine you test your code on. So making assumptions about the processing time of thousands of records is asking for serious trouble.
Without further specification, it seems to me answering your question would be like helping you finish a gun that would only allow you to shoot yourself in the foot :)
All this being said, any JavaScript object inherently does exactly what you ask for, provided you're okay with using strings for IDs, since an object property name can also be used as an index in an associative array.
var associative_array = {}
var bob = { id:1456, name:"Bob" }
var ted = { id:2375, name:"Ted" }
// store some data with arbitrary ids
associative_array[bob.id] = bob
associative_array[ted.id] = ted
console.log(JSON.stringify(associative_array)) // Bob and Ted
// access data by id
var some_guy = associative_array[2375] // index will be converted to string anyway
console.log(JSON.stringify(some_guy)) // Ted
var some_other_guy = associative_array["1456"]
console.log(JSON.stringify(some_other_guy)) // Bob
var some_AWOL_guy = associative_array[9999]
console.log(JSON.stringify(some_AWOL_guy)) // undefined
// delete data by id
delete associative_array[bob.id] // so long, Bob
console.log(JSON.stringify(associative_array)) // only Ted left
Though I doubt speed will really be an issue, this mechanism is about as fast as you will ever get JavaScript to run, since the underlying data structure is a hash table, theoretically O(1).
Anything involving array methods like find() or filter() will run in at least O(n).
Besides, each invocation of filter() would waste memory and CPU recreating the array to no avail.
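To cover the periodic cleanup from the question as well, the same plain object can be swept by deleting keys whose id is below the threshold. A rough sketch (cleanOldEntries is an illustrative name, and cleanSize stands for the threshold id from the question):
function cleanOldEntries(store, cleanSize) {
    for (var key in store) {
        if (store[key].id <= cleanSize) {
            delete store[key]; // O(1) removal per entry, no new array is built
        }
    }
}
cleanOldEntries(associative_array, 2000); // drops every stored record with id <= 2000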

Best practice for caching/garbage collection with database record-backed constructors in JS

I am in a scenario that seems like it would be fairly commonplace, but I haven't seen much written on the topic to date as far as best practices go. I have a solution in place currently that is functional, but does not seem like the most efficient or optimal way of doing things.
In short, I have a database of several hundred thousand records and a nodejs process to interface with it. When a client or system process wants to work on or use information from a record, it uses a constructor to load it into a Javascript object pseudo-class that has prototype methods attached to it for getting, setting, mutating, and otherwise working with the data in the record. There is no clear time when a client process has "finished" working with a record, and it is possible that two clients or processes will want to work on the same record simultaneously. So I need to keep a cached copy of the record stored in a global object to avoid async race conditions with the database.
Problem is, I have to clear out this cache periodically. Currently, I have my own rudimentary garbage collector that sets a timeout of x number of minutes -- if no operations have been performed on that record object within that period, it deletes its own pointer from the global object to allow it to be picked up by GC. Next time a client needs to access it, it is loaded again from the DB and the timer begins anew.
While this is a currently functional process, I feel like as a general rule any time you have to employ timeouts and timers it is a clunky solution. But it seems like this is a common scenario many designers would find themselves facing. Is there an accepted best practice on how to create and destroy prototypical objects based on a large database set that can't fit entirely into memory at any given time?
Edit:
The current working method works basically as follows (simplified):
var records = {};

var Record = function (recordId) {
    this.lastOpTimestamp = Date.now();
    this.set = function (prop, value) {
        ....
        this.lastOpTimestamp = Date.now();
    };
    this.get = function (prop) {
        ....
        this.lastOpTimestamp = Date.now();
    };
};

records[recordId] = new Record(recordId);

function gcSweep() {
    var recordIds = Object.keys(records);
    for (var i = 0; i < recordIds.length; i++) {
        if ((Date.now() - records[recordIds[i]].lastOpTimestamp) > 600000) {
            delete records[recordIds[i]];
        }
    }
}

setInterval(gcSweep, 10000);
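For completeness, the lookup side described above (reload from the database on a cache miss, and reset the timer on every access) might look roughly like this; loadFromDb is a hypothetical async DB call, not part of the original code:
function getRecord(recordId, callback) {
    var cached = records[recordId];
    if (cached) {
        cached.lastOpTimestamp = Date.now(); // restart the idle timer
        return callback(null, cached);
    }
    loadFromDb(recordId, function (err, row) { // hypothetical DB fetch
        if (err) { return callback(err); }
        records[recordId] = new Record(recordId); // re-cache on miss; row data would be copied in here
        callback(null, records[recordId]);
    });
}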

Global JQuery Selector caching to increase performance

I am trying to increase the jQuery performance of my mobile HTML5 app. I read some guides about storing frequently used jQuery selectors in global objects. The app is pretty big and I didn't expect a big performance boost, but the app actually ran even slower (around 20%).
I only use jQuery to find elements by id ($("#id") or $_element.find("#id")). IDs are unique, so I am only interested in finding the first element. I managed to route all jQuery calls through a cacheHandler object, which stores all selectors by id. The cache is cleared frequently and contains around 30 elements per cycle.
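For illustration, the kind of cacheHandler described here might look something like this (a sketch with made-up names, not my actual code):
var cacheHandler = {
    cache: {},
    get: function (id) {
        if (!this.cache[id]) {
            this.cache[id] = $("#" + id); // cache the jQuery object by id
        }
        return this.cache[id];
    },
    clear: function () {
        this.cache = {}; // dropped wholesale at the end of each cycle
    }
};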
With these changes the app ran slower, so I tried some more things to increase performance:
cache[id] = $(document.getElementById(id))
cache[id] = document.getElementById(id)
store selectors keyed by hash code: cache[id.hashCode()]
My theory was that this solution is slow because memory grows quickly, since whole DOM elements with all their children end up stored in the cache.
So my next idea was to cache the element path as an array, like
document.body.children[1].children[5].children[0] => [1,5,0]
That way I only have to find the element once, store the path array, and walk the path whenever I need the element again.
This didn't change anything either; all of these ideas were even slower than simply calling $("#id") whenever I need the element.
If needed I can provide more information or snippets.
I would be thankful for any explanation of why this is slowing down my app.
If it's a mobile HTML5 app, why are you using jQuery for selectors? It seems very redundant.
I usually do something along the lines of this:
// helpers, since I hate typing document.get...
function _id(e) { return document.getElementById(e); }     // single element by id
function _get(e) { return document.querySelector(e); }     // first matching element
function _all(e) { return document.querySelectorAll(e); }  // all matching elements
// loop helper (for changing or doing things to a collection of elements)
function _for(e, f) { var i, len = e.length; for (i = 0; i < len; i++) { f(e[i]); } }

// VARs (global)
var c = _id('c'),                                  // main container
    box = c.querySelector('.box'),                 // first .box in 'c'
    elements = box.querySelectorAll('.element');   // elems in 'box'

// Change background-color for all .element using the _for helper
_for(elements, function (e) { e.style.backgroundColor = 'red'; });
I only store the main parent elements so that I can then query down the DOM if needed (limiting the lookup needed for traversing). In the example variables above, one could imagine that something inside .box changes several times, OR that .box is a slow selector.
Do note, though, that global variables increase memory usage since they can interfere with garbage collection. Also note that objects can be slower in some browsers, and if it doesn't bloat your code too much you should store things more plainly (you shouldn't keep too many global variables anyway, so...).
Edit, jsPerf:
http://jsperf.com/global-vs-local-vars-14mar2015
Do note, however, that what you're selecting and exactly what you're doing with it will have the greatest impact. In the jsPerf example the difference between local and global quickly diminishes as soon as one starts selecting descendants from the globally cached selectors, i.e. doing box.find('p').css('color','blue') etc.
This is quite old, but you never know, someone might still read this post.
jQuery is based on Sizzle, which is much smaller on its own: https://sizzlejs.com/
You could include only that library. I would not recommend maintaining your own piece of code for this purpose; it is already done and maintained by someone else.

Is it faster to store data on a dom element with a class or jQuery data

I have a render loop that updates the status of a number of DOM elements on every iteration. The code must apply presentational classes to each element based on a value obtained from an external data source.
E.g.
var animationFrames = ... // result of HTTP GET request
for (var i in animationFrames) {
    var frame = animationFrames[i];
    // This function reads data from the frame variable and updates
    // the presentational layer
    updateDomElements(frame);
}

// declared as a function declaration so it is hoisted above the loop
function updateDomElements(frame) {
    var rooms = frame.rooms;
    for (var i in rooms) {
        var room = rooms[i];
        var roomEl = $("#" + room.id); // look the element up by id
        if (!roomEl.hasClass(room.class)) {
            roomEl.attr("class", room.class);
        }
    }
}
I want to know: what is the most efficient way of storing presentational data in the DOM? I'm worried that reading the class string on every iteration will be too expensive.
Would it be better to use the jQuery data() API and then set the class? Is there a more performant method that I haven't thought of?
I can't find my reference, but from what I remember, using $.data() is the fastest (faster than $(selector).data()). jQuery stores the data in memory and doesn't touch the DOM.
Any interaction with the DOM will be slower than accessing the data value in memory.
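As a rough sketch of the idea (assuming roomEl is the plain DOM node and room is the record from the loop in the question; "class" is just an arbitrary data key):
var roomEl = document.getElementById(room.id);

// read the last applied class from memory instead of from the DOM
if (jQuery.data(roomEl, "class") !== room.class) {
    roomEl.className = room.class;            // touch the DOM only when it actually changes
    jQuery.data(roomEl, "class", room.class); // remember what was applied
}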
Edit: In the comments from jQuery.data()... here is a jsPerf test of $.data() vs $(sel).data() as well as slide 51 of this presentation.
Edit 2: There is another nice discussion on SO. Especially read the comments by patrick dw!
Here is a test you can try yourself. It looks like $.data is the best:
http://jsperf.com/dom-vs-jquery-data

Greasemonkey failing to GM_setValue()

I have a Greasemonkey script that uses a Javascript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I can not for the life of me determine why. The following problem code:
Works for other larger objects being maintained.
Is presently handling a smaller total amount of data than previously worked.
Is not colliding with any function or other object definitions.
Can (optionally) successfully save the problem storage key as "{}" during code startup.
this.save = function(table) {
    var tables = this.tables;
    if (table)
        tables = [table];
    for (var i in tables) {
        logger.log(this[tables[i]]);
        logger.log(JSON.stringify(this[tables[i]]));
        GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
        logger.log(tables[i] + "_" + this.user + " updated");
        logger.log(GM_getValue(tables[i] + "_" + this.user));
    }
}
The problem is consistently reproducible, and the logging statements produce the following output in Firebug:
Object { 54,10 = Object } // Expansion shows complete contents as expected, but there is one oddity--Firebug highlights the object keys in purple instead of the usual black for anonymous objects.
{"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}} // The correctly stringified JSON.
bookmarks_HonoredMule updated
undefined
I have tried altering the format of the object keys, to no effect. Further narrowing down the issue: this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that step does not help either. Reloading the page confirms that saving the non-empty object truly failed.
Any idea what could cause this behavior? I have thoroughly explored the possibility of hitting size constraints, but that does not appear to be the problem; as previously mentioned, I have already reduced storage usage. Other larger objects still save, and the total number of objects, which was not high to begin with, has been reduced by more than the quantity of data I am attempting to store here.
It turns out the issue was that this.save() was being called from an unsafeWindow context. This is a security violation, but one that should have resulted in an access violation exception being thrown:
Error: Greasemonkey access violation: unsafeWindow cannot call GM_getValue.
Instead, GM_setValue returned having done nothing, and the subsequent logging instructions still executed, so there was no hint of the issue; the documentation may be out of date.
In my quest to solve this problem by any means, I had already abstracted away the GM_ storage functions so I could use other storage mechanisms, so the workaround will be to put all save instructions in a pre-existing cleanup routine that runs in setInterval, similar to the fix described in the aforementioned documentation. (Reusing an existing interval avoids creating extra timers, which have in the past degraded performance over browser uptime.)
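In outline, that workaround could look roughly like this (a sketch with illustrative names such as storage and dirtyTables; the key point is that GM_setValue is only ever called from the timer callback, which runs in the script's own privileged context rather than via unsafeWindow):
var dirtyTables = {};

// callable from anywhere, including code reachable from unsafeWindow:
// just mark the table as needing a save instead of writing immediately
storage.save = function (table) {
    dirtyTables[table] = true;
};

// pre-existing cleanup interval running in the userscript's own context
setInterval(function () {
    for (var table in dirtyTables) {
        GM_setValue(table + "_" + storage.user, JSON.stringify(storage[table]));
        delete dirtyTables[table];
    }
}, 60000);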
