In cytoscape.js, is there a way to actually delete a node from memory completely? I wrote a layout and it consolidates nodes by making new ones and adding them to the graph, but if the layout is re-run, then I need to remove those nodes and recalculate, but there doesn't appear to be a way to allow these nodes to be garbage collected.
If there is a way to do this or is there another way I should approach this?
Thank you.
JS is a garbage collected language, so it suffices to drop all your own references to the node. If you remove a node from a graph and don't have any of your own references to it, then the garbage collector will clean it up sooner or later.
See also https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management
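In practice that means the layout should both remove the nodes from the graph and null out its own handle to them. A hypothetical sketch (the `layoutState` object and its field names are made up, though `cy.remove()` is the real Cytoscape.js removal call):

```javascript
// Hypothetical sketch: before re-running the layout, remove the previously
// consolidated nodes from the graph and drop the layout's own references
// so the garbage collector can reclaim them.
function rerunLayout(cy, layoutState) {
  if (layoutState.consolidated) {
    cy.remove(layoutState.consolidated); // detach the collection from the graph
    layoutState.consolidated = null;     // drop our own reference to it
  }
  // ...recalculate, build new consolidated nodes, and cy.add() them...
}
```

With no remaining references from either the graph or the layout's own state, the old nodes become eligible for collection.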
Related
I have created a node with a sprite, and when I use child.removeFromParent() on the node, the node disappears but I can still access its contents, such as the position of the sprite. I am worried about what happens if I create many nodes and delete them immediately.
Would that cause a memory leak? Or how can I completely delete a node in Cocos2d-js?
I think Cocos uses an internal GC, so an object may live on for a while after it is removed.
You can also use retain/release to manage the object's lifetime manually:
retain it when you create it, and release it when you delete it after a removeChild.
Try removeFromParentAndCleanup(cleanup) instead
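For illustration, a hedged sketch of the retain/release pattern from the answers above. The helper names are made up; manual retain()/release() matters mainly in Cocos2d-JS native (JSB) builds, where the underlying C++ objects are reference-counted:

```javascript
// Hypothetical helpers wrapping the retain/release pattern.
// `node` and `parent` are assumed to be cc.Node instances.
function addManaged(parent, node) {
  node.retain();         // keep the object alive even while off the scene graph
  parent.addChild(node);
}

function removeManaged(node) {
  node.removeFromParent(true); // true: also clean up running actions/callbacks
  node.release();              // balance the earlier retain()
}
```

Every retain() must be paired with exactly one release(), otherwise the native object leaks.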
I have a question about using Chrome's developer tools to debug memory leaks in a single-page web application.
According to Google's documentation, after taking a heap snapshot you'll see red and yellow detached DOM nodes. The yellow nodes are those still being referenced by JavaScript, and effectively represent the cause of the leak. The red nodes are not directly referenced in JavaScript, but they're still "alive"—likely because they're part of a yellow node's DOM tree.
I've been able to fix several memory leaks by drilling down on all the yellow nodes in my heap snapshots and finding where in our code there was still a reference to them. However, now I've got a situation I'm not sure how to handle: Only red nodes are showing up in my heap snapshot!
If there is no JavaScript reference to these nodes, what are some other reasons that they wouldn't be garbage collected? Also, why does it say there are 155 entries but only show 60? I'm wondering if Chrome simply isn't showing one or more yellow nodes:
Per your request, adding this as an answer. Have you looked at more details on any of these DOM elements to see which elements they are? Perhaps that gives you a clue as to what code would ever have referenced them. One source of references that trips some people up is closures that you are done with, but that are still alive for some reason.
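To illustrate that closure pitfall in plain JS (a contrived example; the plain object stands in for a detached DOM subtree):

```javascript
// A long-lived callback closes over an object that "should" be gone,
// keeping it reachable and therefore uncollectable.
function makeTicker() {
  var detached = { id: 'big-subtree' }; // imagine a removed DOM node here
  return function tick() {
    return detached.id; // this closure retains `detached` indefinitely
  };
}

var tick = makeTicker();
// As long as `tick` is reachable (e.g. registered with setInterval or as an
// event handler), `detached` can never be garbage collected.
```

The fix is usually to unregister the callback, or to null out the captured variable once it is no longer needed.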
I'm having trouble diagnosing a detached DOM tree memory leak in a very large single-page web app built primarily with Knockout.
I've tweaked the app to attach a dummy FooBar object to a particular HTML button element which should be garbage collected as the user moves to a different "page" of the app. Using Chrome's heap snapshot function, I can see that an old FooBar instance (which should have been GC'ed) is still reachable from its HTMLButtonElement in a (large) detached DOM tree.
Tracing the references via the retaining tree panel, I follow the chain of decreasing distance from the GC root. However, at some point my search reaches a dead end at a node at distance 4 from the root (in this case)! The retaining tree reports no references to this node at all, yet somehow knows it is four steps from the GC root.
Here is the part of the retaining tree which has me puzzled (the numbers on the right are distances from the root):
v foobar in HTMLButtonElement                    10
  v [4928] in Detached DOM tree / 5643 entries    9
    v native in HTMLOptionElement                 8
      v [0] in Array                              7
        v mappedNodes                             6
          v [870] in Array                        5
            v itemsToProcess in system / Context  4
              context in function itemMovedOrRetained()
              context in function callCallback()
The retaining tree doesn't show the references here at distance 3 or above.
Can anyone explain this to me? I was hoping I'd be able to follow the reference chain back up to the offending part of the JavaScript app code, but this has me stymied!
First of all - do not use delete as one of the comments suggested. Setting a reference to null is the right way to dispose of things. delete breaks the "hidden class". To see it yourself, run my examples from https://github.com/naugtur/js-memory-demo
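A minimal sketch of the delete-vs-null point (the property names are made up):

```javascript
// Nulling a property frees the referenced data without changing the object's
// shape; `delete` also frees it, but mutates the hidden class and can
// deoptimize code that handles objects of the same shape.
function makeRecord() {
  return { payload: new Array(1000).fill(0), next: null };
}

var rec = makeRecord();
rec.payload = null;   // good: the big array is now collectable, shape intact
// delete rec.payload; // avoid: frees the array too, but changes the shape
```

Either way the array becomes collectable; only the nulling version keeps objects from makeRecord() uniform for the JIT.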
Rafe, the content you see in profiler is often hard to understand. The bit you posted here does seem odd and might be a bug or a memory leak outside of your application (browsers leak too), but without running your app it's hard to tell. Your retaining tree ends in a context of a function and it can be retained by a reference to that function or some other function sharing the context. It might be too complicated for the profiler to visualize it correctly.
I can help you pinpoint the problem though.
First, go to the Timeline tab in devtools and use it to observe the moment your leak happens. Select only memory allocation and start recording, then go through a scenario that you expect to leak. The bars that remain blue are the leaks. You can select their surroundings in the timeline and focus on their retaining tree. The most interesting elements in detached DOM trees are the red ones: they're referenced from the outside. The rest is retained because once any element in a tree is referenced, it has references to everything else (x.parentNode).
If you need more details, you can take multiple snapshots in the profiler, so that you have a snapshot before and after the cause of the leak (which you found with the timeline, so you now know the exact action that causes it). You can then compare those in the profiler - there's a "compare" view, which is more comprehensible than the others.
You can also save your heap snapshots from the profiler and post them online, so we could take a look. There's a save link on each of them in the list to the left.
Profiling memory is hard and actually requires some practice and understanding of the tools.
You can practice on some examples from my talk:
http://naugtur.pl/pres/mem.html#/5/2
but the real complete guide to using memory profiler is this doc:
https://developer.chrome.com/devtools/docs/javascript-memory-profiling#looking_up_color_coding
Updated link: https://developers.google.com/web/tools/profile-performance/memory-problems/memory-diagnosis
I have an Ext.tree.TreePanel used with AsyncTreeNodes. The problem is that initially the root node needs to have more than 1000 descendants. I succeeded in optimizing the DB performance, but the JavaScript performance is terrible: 25 seconds for adding and rendering 1200 nodes. I understand that manipulating the page's DOM is a slow operation, but perhaps there is some way to optimize the initial rendering process.
You could create a custom tree node UI that has a lower DOM footprint. In other words, change the HTML that is used to create each node of the tree to some less verbose (and likely less flexible) HTML.
Here are some references for doing that:
http://github.com/jjulian/ext-extensions/blob/master/IndentedTreeNodeUI.js
http://david-burger.blogspot.com/2008/09/ext-js-custom-treenodeui.html
Enjoy.
I don't think you'll have much luck optimizing a tree with that many nodes. Is there any way you could use a grid to deliver the information? You could at least set up paging with that, and it would probably be a lot faster. You could also implement the row expander UX on the grid, which behaves like a tree, sort of, for each row.
I'm trying to optimize a sortable table I've written. The bottleneck is in the DOM manipulation. I'm currently creating new table rows and inserting them every time I sort the table. I'm wondering if I might be able to speed things up by simply rearranging the rows rather than recreating the nodes. For this to make a significant difference, DOM node rearranging would have to be a lot snappier than node creation. Is this the case?
thanks,
-Morgan
I don't know whether creating or manipulating is faster, but I do know that it'll be faster if you manipulate the entire table when it's not on the page and then place it on all at once. Along those lines, it'll probably be slower to re-arrange the existing rows in place unless the whole table is removed from the DOM first.
This page suggests that it'd be fastest to clone the current table, manipulate it as you wish, then replace the table on the DOM.
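A hedged sketch of that clone-then-replace approach (assuming a plain HTML table; `compareRows` is whatever comparison your sort uses):

```javascript
// Sort the rows on an off-DOM clone, then swap the clone in with a single
// replaceChild, so the browser reflows once instead of once per row.
// Note: appendChild on a row that already has a parent MOVES it, so the
// existing row nodes are reused rather than recreated.
function resortTable(table, compareRows) {
  var clone = table.cloneNode(true);             // off-DOM working copy
  var tbody = clone.tBodies[0];
  var rows = Array.prototype.slice.call(tbody.rows); // static copy of the live list
  rows.sort(compareRows);
  rows.forEach(function (row) {
    tbody.appendChild(row);                      // moves the row into sorted position
  });
  table.parentNode.replaceChild(clone, table);   // one reflow for everything
  return clone;
}
```

Remember to re-attach any event handlers the old table had, or use event delegation on an ancestor that survives the swap.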
I'm drawing this table about twice as quickly now, using innerHTML and building the entire contents as a string, rather than inserting nodes one by one.
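That string-building approach can be sketched like this (a simplified example; real code should HTML-escape the cell values):

```javascript
// Concatenate the markup for every row, then assign it to innerHTML once,
// instead of creating and inserting row elements one at a time.
function buildRowsHtml(rows) {
  var html = '';
  for (var i = 0; i < rows.length; i++) {
    html += '<tr><td>' + rows[i].name + '</td><td>' + rows[i].value + '</td></tr>';
  }
  return html;
}

// Usage: tbody.innerHTML = buildRowsHtml(data); // a single DOM update
```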
You may find this page handy for some benchmarks:
http://www.quirksmode.org/dom/innerhtml.html
I was looking for an answer to this and decided to set up a quick benchmark http://jsfiddle.net/wheresrhys/2g6Dn/6/
It uses jQuery, so it is not a pure benchmark, and it's probably skewed in other ways too. But the result it gives is that moving DOM nodes is about twice as fast as creating and destroying DOM nodes every time.
If you can, it is better not to perform the DOM manipulation node by node, but to build the changes up in your script first and then touch the DOM once. Rather than causing what is called a repaint for every single node, attach the new nodes to a parent that is not yet in the document, and then attach that parent to the actual DOM, resulting in just two repaints instead of hundreds. I say two because you also need to clean up what is in the DOM already before inserting your new data.
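A sketch of that two-repaint batching using a DocumentFragment (a contrived example; `renderList` is a made-up helper):

```javascript
// Build all the new nodes in a DocumentFragment (no repaints while off-DOM),
// clear the container once, then attach the fragment once.
function renderList(container, items) {
  var frag = document.createDocumentFragment();
  items.forEach(function (item) {
    var li = document.createElement('li');
    li.textContent = item;
    frag.appendChild(li);      // off-DOM append: no repaint per item
  });
  container.textContent = '';  // repaint 1: clear the old contents
  container.appendChild(frag); // repaint 2: insert all new nodes at once
}
```

Appending a DocumentFragment moves its children into the container in one operation, which is exactly the "attach the parent, not each node" idea above.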