Resource impact of giving many HTML elements an id? - javascript

I'm writing an HTML + JavaScript application that has quite strict resource limitations: it will run in the browser for ages (many days or more; think of kiosk mode) and should also run without any change on mobile devices. It is also a single HTML page, i.e. one DOM, that uses scrolling etc. to show different content.
=> I really have to make sure not to waste any resources (CPU, RAM)
Now I'm creating hooks that an "external" editor for such an application / page could use to get a WYSIWYG preview when modifying the content. Here I need to address elements on the page - an element is a div that will contain further DOM elements, but it is the smallest addressable unit for the editor. (We can probably assume 100 to 1000 of those elements in this long-running page.)
Now I could find the relevant element for a given "path" with an algorithm at runtime (not elegant, but the lookup time is acceptable in an interactive environment). Or I could add an HTML id attribute to each element, containing its individual path. (This would make my program clearer and the lookup very fast.)
But I don't know the resource impact of giving so many elements an id attribute...
How much RAM would it need? Only the strings and a couple of pointers each?
Or would it create lots of new and heavy internal structures in the browser?

Having additional id attributes on your elements would have very minimal impact on resource use: an id is essentially a string stored on the element plus an entry in the document's internal id lookup table.
The main other effect is that it would increase file size ever so slightly, depending on how many IDs you use and how long they are.
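For a sense of what the two strategies from the question look like in code, here is a minimal sketch; the path-in-id scheme, the `data-segment` attribute, and both function names are assumptions for illustration:

```js
// Strategy 1: encode the editor path in the id attribute,
// e.g. <div id="page3/section2/block1">. Browsers keep an internal
// id-to-element map, so this lookup is effectively constant time.
function lookupById(path) {
  return document.getElementById(path);
}

// Strategy 2: walk the tree at runtime, matching each path segment
// against a data attribute (<div data-segment="section2">). Cost grows
// with tree depth and sibling count, but stays acceptable for
// interactive use with ~1000 elements.
function lookupByWalk(path) {
  let node = document.body;
  for (const segment of path.split("/")) {
    node = Array.from(node.children)
      .find((child) => child.dataset.segment === segment);
    if (!node) return null; // no element on this path
  }
  return node;
}
```

Either way, with 100 to 1000 elements the per-element cost of an id is a short string plus one map entry, which is negligible next to the DOM nodes themselves.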

Related

Preferred way of swapping big trees (e.g. pages in SPAs): `display: none`, `replaceChild()`, etc

When developing complex web applications like SPAs, we face the problem of displaying, enabling, and keeping up to date different parts of the application at different times.
For instance, as a simple example, consider swapping the current page/section/activity of the application in a SPA.
Assuming that these trees are non-trivial (they are entire pages; they may contain CSS animations, event handlers, some non-static content, etc.) and that we care about performance, what is, nowadays, the preferred way of swapping them in and out? What approach do modern frameworks choose, and why?
In particular, several metrics are important to consider:
Overall performance/memory usage/etc. (i.e. when not swapping pages)
Actual time required to switch pages
Note that in order to be as quick as possible, we want to keep the content/pages pre-loaded somehow (i.e. no network requests are discussed here).
For context, here are several ways people have used over the years:
Have some string templates in JS. When pages are switched, replace the contents entirely with innerHTML in one go (for performance), reattach handlers, start updating the non-static values as needed at that moment, etc.
Keep everything in the DOM (i.e. all elements of all possible pages), but have inactive pages with display: none. Then swap the current page simply by switching the display property on the page going out and the one going in. Also, do not update non-visible pages, to avoid performance penalties.
Keep the trees in JS (i.e. keep a reference to each top-most Node of each page in JS), and swap them in and out of the DOM with e.g. replaceChild().
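For concreteness, here is a rough sketch of the last two approaches; the page handles, the pages map, and the container structure are assumptions:

```js
// Approach 2: all pages stay in the DOM; inactive ones are display: none.
// Listeners, scroll positions, and plugin state survive the swap.
function swapByDisplay(currentPage, nextPage) {
  currentPage.style.display = "none";
  nextPage.style.display = ""; // fall back to the stylesheet's value
}

// Approach 3: keep detached page roots in JS and splice them into a container.
// Detached subtrees keep their event listeners, so nothing needs reattaching.
const pages = new Map(); // name -> detached root Node, built at load time
function swapByReplace(container, nextName) {
  container.replaceChild(pages.get(nextName), container.firstElementChild);
}
```

Note that the innerHTML variant is the odd one out: it destroys and recreates nodes, which is why it requires reattaching handlers, whereas both sketches above preserve them.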

Best practice to force the browser to only render user-visible elements?

A particular page on our site loads thousands of divs, each about 1000px × ~1500px (a printable page); each div displays additional elements, a basic table, etc., but can vary in height.
Render time can be several minutes depending on PC performance.
Using tools like webix, which can load millions of rows, proves the render process is taking up most of the loading time, but this doesn't work well for non-tabular data.
Using AngularJS to create infinite-scroll lists is possible, but this also doesn't work well with varying-height elements.
All solutions I have found so far lose the browser's find feature, which our users commonly use, so we will probably have to develop our own search tool.
Yes, we could add pagination, or some sort of way of breaking down the data, but users still need to review all the data regardless of how it's broken down.
The same data (10,000 pages, 30 MB) once exported to PDF loads in less than 1 second.
I think the best solution will be the combination of a few different ideas.
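One such combination is to keep cheap placeholder divs in the document, sized to each page's (estimated) height so scrolling stays natural, and to render the heavy content only near the viewport. A hedged sketch using IntersectionObserver; the .page-placeholder markup and renderPage() are assumptions:

```js
// Render a page's content only when its placeholder nears the viewport,
// and drop it again once it scrolls far away, keeping DOM size bounded.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    const el = entry.target;
    if (entry.isIntersecting && !el.dataset.rendered) {
      el.innerHTML = renderPage(el.dataset.pageIndex); // hypothetical renderer
      el.dataset.rendered = "true";
    } else if (!entry.isIntersecting && el.dataset.rendered) {
      el.innerHTML = "";          // free the off-screen subtree
      delete el.dataset.rendered;
    }
  }
}, { rootMargin: "2000px" });     // start rendering a couple of screens early

document.querySelectorAll(".page-placeholder")
  .forEach((el) => observer.observe(el));
```

The known trade-off stands: the browser's find feature only sees rendered pages, so a custom search over the source data would still be needed.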

How to make a page with a big cytoscape.js network more responsive

My institution is dealing with several fairly large networks:
around 1000 nodes and 2000 edges (so smaller than this one),
fixed node positions,
potentially dozens of attributes per element,
and a minified network file size of 1 MB and up.
We visualize the networks in custom-built HTML pages (one network per page) with cytoscape.js. The interface allows the user to select different pre-defined coloring options for the nodes (the main functionality) - colors are calculated from one of a set of numeric node attributes. Accessory functions include node qtips, finding a node and centering on it (via a select2 dropdown), and highlighting nodes based on their assignment to pre-defined groups (via another select2 dropdown).
We face the problem that the page (and so necessarily the whole browser) becomes unresponsive for at least 5 seconds whenever a color change is initiated, and we are looking for strategies to remedy that. Here are some of the things that we have tried or are contemplating:
Transplant node attributes to separate files that are fetched on demand via XHRs (done, but performance impact unclear).
Offload cytoscape.js to a Web worker (produced errors - probably due to worker DOM restrictions - or did not improve performance).
Cache color hue calculation results with lodash's memoize() or similar (pending).
Export each colorized network as an image and put some fixed-position HTML elements (or a canvas?) on top of the image stack for node qtips and such. So we would basically replace cytoscape.js with static images and custom JavaScript.
I appreciate any suggestions on alternative or complementary strategies for improving performance, as well as comments on our attempts so far. Thanks!
Changing the style for 1000 nodes and 2000 edges takes ~150ms on the machine I'm using at the moment. It follows that the issue is very likely to be in your own code.
You haven't posted an example or a sample of what you've currently written, so it's not possible for me to say what the performance issue is in your code.
You've hinted that you're using widgets like <select>. I suspect you're reading from the DOM (e.g. widget state) for each element. DOM operations (even reads) are slow.
Whatever your performance issue is, you've decided to use custom functions for styling. The docs note this several times, but I'll reiterate here: Custom functions can be expensive. Using custom functions is like flying a plane without autopilot. You can do it, but now you have to pay attention to all the little details yourself. If you stick to the in-built stylesheets features, style application is handled for you quickly and automatically (like autopilot).
If you want to continue to use expensive custom functions, you'll have to use caching. Lodash makes this easy. Even if you forgo custom functions for setting .data() with mappers, you still may find caching with Lodash useful for your calculations.
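As a hedged sketch of that caching idea, assuming a cytoscape instance cy, a numeric 'score' node attribute, and a hypothetical computeHue():

```js
import memoize from "lodash/memoize";

// Hypothetical expensive mapping from a numeric node attribute to a hue.
function computeHue(value) {
  // ... imagine something costly here ...
  return (value * 137.5) % 360;
}

// Identical inputs hit the cache, so repeated recolorings become cheap.
const cachedHue = memoize(computeHue);

// Write the result into node data once, and let the stylesheet's in-built
// mappers read it, instead of calling a custom style function per node.
cy.batch(() => {
  cy.nodes().forEach((node) => {
    node.data("hue", cachedHue(node.data("score")));
  });
});
```

cy.batch() is worth using here too: it defers style recalculation until all the data writes are done.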
You may find these materials useful:
Chrome JS profiling
Firefox performance profiling
Cytoscape.js performance hints

Rebuilding tables from the ground up or updating the content?

I'm building a browser based game in JavaScript.
It contains a lot of Information visualized via tables.
The game is turn-based, so whenever a turn is completed, I need to update the innerHTML of many of those tables' cells.
My question is:
Is it smarter to give IDs to all the <td> and update the innerHTML or is it smarter to wrap the tables inside a div, clear the div and rebuild all tables from scratch, then append them?
It depends on how long a view stays active, if the view is shared, how many cells change and how frequently.
If you have a high number users looking at different views/pages that stay active for a long time, then it might produce less load on your servers if you can make infrequent updates to individual cells.
If the changes happen less frequently and a high proportion of cells change, then it may be best to refresh the whole table. This would be 'less chatty' and use less network bandwidth overall.
However if you have a high number of users, all looking at the same view/page, you may wish to look into CQRS and caching your views or view data.
I would rather replace the innerHTML: the code will look nicer and it will take a lot less effort, because instead of recreating the whole thing you would just be replacing a string on an object, which is obviously a lighter task. So in most cases it makes sense to do that.
Consider using a framework or templates, though.
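As a minimal sketch of the id-based variant (the cell ids and the shape of the turn data are assumptions):

```js
// Each <td> carries a stable id such as "gold-row3"; after a turn we patch
// only the cells whose values actually changed instead of rebuilding tables.
function applyTurnResults(changes) {
  for (const { cellId, value } of changes) {
    const cell = document.getElementById(cellId);
    if (cell && cell.textContent !== String(value)) {
      cell.textContent = value; // textContent suffices for plain values
    }
  }
}

// Usage: applyTurnResults([{ cellId: "gold-row3", value: 420 }]);
```

If a cell contains markup rather than a plain value, innerHTML would be needed instead, but for plain numbers textContent is cheaper since nothing is re-parsed.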

Javascript: Preserving event hooks and state by updating DOM with only changed nodes or minimum number of potentially changed nodes?

So, for example, if we have a Flash video already buffered and playing, or an autocomplete widget that is active with input, focus, and a visible dropdown (and maybe even loading something), but we only have the HTML of the full document and a copy of the HTML from earlier, how can we merge them into the live DOM without interrupting the user's state?
Ordinarily I'd just program a specific fix manually, specifying the right area to change given its div name, but that is not a known variable for the situation at hand (I'm programming a Pythonic MVC framework with AJAX).
I want to change the smallest amount of nodes, and probably the deepest nodes that I can get away with.
It's ok to require ID's for some of the nodes (e.g: flash or autocomplete widget) but not possible to expect this for all nodes - so in some situations perhaps node position and types will be available to compare documents.
I understand this will not be a complete solution but in some cases it will be all that is required.
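As a hedged sketch of the minimal-update idea (the shape-matching heuristic is an assumption; this is roughly what virtual-DOM diffing libraries later formalized):

```js
// Reconcile `live` (in the document) against `fresh` (parsed from the new
// HTML). Nodes whose subtrees are unchanged are never touched, so their
// focus, buffers, and event listeners survive.
function patch(live, fresh) {
  if (live.isEqualNode(fresh)) return; // identical subtree: leave it alone

  const sameShape =
    live.nodeType === fresh.nodeType &&
    live.nodeName === fresh.nodeName &&
    live.childNodes.length === fresh.childNodes.length;

  if (!sameShape) {
    live.replaceWith(fresh.cloneNode(true)); // structure changed: swap subtree
    return;
  }
  if (live.nodeType === Node.ELEMENT_NODE) {
    syncAttributes(live, fresh);            // same shape: patch attributes...
  } else if (live.nodeValue !== fresh.nodeValue) {
    live.nodeValue = fresh.nodeValue;       // ...or text/comment content
  }
  for (let i = 0; i < live.childNodes.length; i++) {
    patch(live.childNodes[i], fresh.childNodes[i]); // ...then descend
  }
}

function syncAttributes(live, fresh) {
  for (const { name, value } of [...fresh.attributes]) {
    if (live.getAttribute(name) !== value) live.setAttribute(name, value);
  }
  for (const { name } of [...live.attributes]) {
    if (!fresh.hasAttribute(name)) live.removeAttribute(name);
  }
}

// Usage: const fresh = new DOMParser().parseFromString(newHtml, "text/html");
// patch(document.body, fresh.body);
```

Where ids are available (as for the flash or autocomplete widgets), matching children by id instead of by position would make the diff more robust against inserted or removed siblings.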
