I have a Chrome extension that modifies the DOM based on keywords. The problem is that, for websites like Twitter that have infinite scroll, I need a way for my function to keep firing as the user scrolls through the page.
Is .livequery() the only way to do this or is there a better way?
Right now all of the logic is plain JavaScript/jQuery, but I'm open to using a framework like Angular if that's the best way to do it.
I have several functions that interact:
1) a hide() function that adds a class to divs containing words I want hidden
2) a walk() function that walks the DOM and identifies divs to call hide() on
3) a walkWithFilter() function that gets the words to filter from localStorage and calls walk()
The last function, walkWithFilter(), is called from a window.onload event handler.
It seems like the onScroll event would be a natural match for this. The trick would be that you'd need to keep track of what's already been processed to avoid reprocessing old content. If you're assuming that the user is always exposing new content below the existing content, that could be as simple as keeping a pointer to the last processed item and restarting the walkWithFilter method from there. That doesn't seem like an entirely safe assumption to me, though.
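A minimal sketch of that simple approach, assuming new content is always appended below the existing content (the walkWithFilter(startNode) variant that accepts a starting element is hypothetical; your current function would need a small change to support it):

var lastProcessed = null;
var scrollTimer = null;

window.addEventListener('scroll', function () {
    // Throttle so the walk doesn't run on every single scroll event.
    if (scrollTimer) return;
    scrollTimer = setTimeout(function () {
        scrollTimer = null;
        // Start from the sibling after the last element we processed,
        // or from the top of the body on the first run.
        var start = lastProcessed ? lastProcessed.nextElementSibling
                                  : document.body.firstElementChild;
        if (!start) return;
        walkWithFilter(start);                          // process only the new content
        lastProcessed = document.body.lastElementChild; // remember where we stopped
    }, 250);
});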
If you want to be more robust in that regard, you could try a virtual DOM approach: you maintain a copy of the DOM as you last saw it, compare it to the DOM as it currently exists, and take a diff. I know there are a bunch of premade libraries for this kind of thing, but I haven't used any and can't recommend a specific one. It also doesn't appear to be overly burdensome to roll your own, if you're so inclined.
Related
Is there a way to get notified, after inserting an element into the DOM with insertBefore(), when this element actually becomes visible/available to the user? Especially in order to start applying CSS transforms on it?
Complete problem
Forgive me if this question has been asked before; I haven't found a suitable answer so far. I'm trying to implement a custom popup dialog system on a website of my own, similar to SweetAlert or other such products.
I would like to apply some special effects when this popup shows up, such as a progressive darkening of the background, as well as a slow vertical motion on the box itself.
To achieve all of this, I spawn one big, fixed div element covering the whole screen (the background) and containing the popup box. When I need it, I first insert this element as the body's first child, tagging it with a special invisible class. Once inserted, I remove this invisible class from the element and let the CSS rules do the magic.
The problem is that even if I remove the class right after inserting this element, it is only rendered once the JavaScript function returns, so it appears directly in its final state.
When doing this on a complete initial page, the load event helps. I would now like to do the same on an existing page.
As always, I'm interested in both sides of this (potentially XY) problem: if there's a better way to do it, I'll be happy to discover it, but I'm still interested in solving this particular situation anyway.
Thanks in advance to everyone.
EDIT: currently performing tests on Firefox 82.0.2
Thanks to the comments above, here's a valid solution to both of the problems described:
"Mutation Observer", as well as former "Mutation Events" (now deprecated) are the best way to get notified when something is inserted. It won't help with animations issues, though, because it's still not guaranteed to be rendered yet at this time ;
Rather than applying one class and then another to perform a transition, it's better to define a regular animation using @keyframes that plays only once. It is guaranteed to play when the element appears, by definition.
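For the first point, a minimal MutationObserver sketch (the popup-overlay class name is just an example, not taken from the actual code):

const observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (mutation) {
        mutation.addedNodes.forEach(function (node) {
            // Only react to element nodes matching our (example) overlay class.
            if (node.nodeType === 1 && node.classList.contains('popup-overlay')) {
                // The node is in the DOM here, but not necessarily painted yet,
                // which is why a play-once @keyframes animation is the safer way
                // to run the entrance effect.
                console.log('popup inserted:', node);
            }
        });
    });
});
observer.observe(document.body, { childList: true, subtree: true });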
Many thanks to "Pomax", F4st3r and epascarello for their help.
I have 2 JS variables, before and after. They contain the SAME HTML document, but with some modifications: about 1%-10% changes between them. I want to update the body from before to after. The variables before and after are raw strings.
I can do something like this:
document.documentElement.innerHTML = after;
The problem is that if I render this way it does not look good. The render takes time, and there is a white screen between the renders. I want to show the user 10 modifications per second (a video of modifications).
So what I want to do is find only the elements that changed, purely by analyzing the HTML text of before and after.
My approach so far:
I can find the changes and their positions in the text using the JavaScript library for diff & match & patch.
The question is:
After I find the text changes, how do I find only the elements that changed, so that I update only those elements?
I thought maybe I could create a range that contains every change and update the range, but how exactly would I do that?
If anything is unclear, please comment and I will explain further.
I found a very good library for it: https://github.com/patrick-steele-idem/morphdom
Lightweight module for morphing an existing DOM node tree to match a target DOM node tree. It's fast and works with the real DOM—no virtual DOM here!
Very easy to use, and it does exactly what I need.
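For reference, a minimal usage sketch (assuming after is an HTML string whose root is a <body> element, as in the question; see the morphdom README for the available options):

var morphdom = require('morphdom');

// Morph the live <body> in place so that it matches the "after" markup,
// touching only the parts that actually differ.
function applyUpdate(after) {
    morphdom(document.body, after);
}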
If I have understood your question correctly, then what I would do is:
1) Make a new object (a view object) which will control the rendering of DOM elements (similar to MVC).
2) In this object, create 3 functions:
a) an init function (contains the event handlers)
b) a render1 function (which will render the content of the before variable)
c) a render2 function (which will render the content of the after variable)
Whenever there is an event that requires changing the HTML of a class/id/body/document, handle it in the init function and call the render2 function, which contains the after content.
This should not give any errors. The browser still has to do the work of rendering the page, but that rendering can be divided over multiple elements of the document. So, whenever you need to render only a part of the document, make separate render functions.
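A rough sketch of that structure (the element ids are placeholders, and before/after are the variables from the question):

var viewObject = {
    init: function () {
        // Event handlers live here; they decide which render function to call.
        document.getElementById('toggle').addEventListener('click', function () {
            viewObject.render2();
        });
        viewObject.render1();
    },
    render1: function () {
        // Render the "before" state of only the fragment this view controls.
        document.getElementById('content').innerHTML = before;
    },
    render2: function () {
        // Render the "after" state of the same fragment.
        document.getElementById('content').innerHTML = after;
    }
};

viewObject.init();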
P.S. There can be different approaches.
You must implement the LCS (Longest Common Subsequence). To understand this algorithm better you can watch this YouTube video. It's also easier to first study the Longest Common Substring.
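For illustration, here is the standard dynamic-programming formulation of the LCS length (a generic sketch, not tuned for large HTML strings):

// Classic O(n*m) dynamic-programming table for the Longest Common Subsequence length.
function lcsLength(a, b) {
    var n = a.length, m = b.length, i, j;
    // dp[i][j] = LCS length of a.slice(0, i) and b.slice(0, j)
    var dp = [];
    for (i = 0; i <= n; i++) dp.push(new Array(m + 1).fill(0));
    for (i = 1; i <= n; i++) {
        for (j = 1; j <= m; j++) {
            dp[i][j] = a[i - 1] === b[j - 1]
                ? dp[i - 1][j - 1] + 1
                : Math.max(dp[i - 1][j], dp[i][j - 1]);
        }
    }
    return dp[n][m];
}

lcsLength('ABCBDAB', 'BDCABA'); // 4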
I think I have a solution. virtual-dom can do the work for me. I can create two VTrees, make a diff, and apply a patch.
virtual-dom is what I need.
From the documentation of virtual-dom:
Manual DOM manipulation is messy and keeping track of the previous DOM state is hard. A solution to this problem is to write your code as if you were recreating the entire DOM whenever state changes. Of course, if you actually recreated the entire DOM every time your application state changed, your app would be very slow and your input fields would lose focus.

virtual-dom is a collection of modules designed to provide a declarative way of representing the DOM for your app. So instead of updating the DOM when your application state changes, you simply create a virtual tree or VTree, which looks like the DOM state that you want. virtual-dom will then figure out how to make the DOM look like this efficiently without recreating all of the DOM nodes.

virtual-dom allows you to update a view whenever state changes by creating a full VTree of the view and then patching the DOM efficiently to look exactly as you described it. This results in keeping manual DOM manipulation and previous state tracking out of your application code, promoting clean and maintainable rendering logic for web applications.
https://github.com/Matt-Esch/virtual-dom
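For completeness, the basic cycle looks roughly like this (adapted from the README; turning raw before/after HTML strings into VTrees needs an extra html-to-vdom style converter, which is not shown here):

var h = require('virtual-dom/h');
var diff = require('virtual-dom/diff');
var patch = require('virtual-dom/patch');
var createElement = require('virtual-dom/create-element');

// 1. Build an initial VTree and create a real DOM node from it.
var tree = h('div', { id: 'content' }, ['before']);
var rootNode = createElement(tree);
document.body.appendChild(rootNode);

// 2. When the state changes, build the new VTree, diff, and patch.
var newTree = h('div', { id: 'content' }, ['after']);
var patches = diff(tree, newTree);
rootNode = patch(rootNode, patches);
tree = newTree;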
I've been learning JavaScript online and have recently started to put it to use on a website. I'm still a little confused about what might be the most efficient/performant ways to do things, though. A few questions with that in mind...
If a webpage has multiple click events in different sections, each doing something different, one event listener in the body tag — with multiple if/else statements — would be the most efficient way to handle them all. Is that correct?
Is the search method a good way to handle if/else statements in this case? For example:
if (event.target.className.search("js-tab") !== -1) {
    // do something
} else if (event.target.className.search("js-dropdown") !== -1) {
    // do something else
}
The ID performance only applies to actually finding an element, right? There wouldn't be a difference between event.target.className.search and event.target.id.search, would there? (assuming there isn't an insane number of class names to search through on that element). I'm currently using className for sections that have the same functionality (multiple tabbed sections on the same page, for example). I suppose I could just as easily use the OR operator (||) if it made a difference, like so:
if ((event.target.id.search("js-tab-one") !== -1) || (event.target.id.search("js-tab-two") !== -1))
When there are multiple elements that a click event could potentially be on (an icon inside of an anchor link, for example), how much of a performance hit is it to add additional if/else statements (i.e. check the tag type, and if it's not the a tag, move up)? I recently refactored my CSS so that my icons were set to width: 100% and height: 100% (ensuring that the click always happens on the same element), but I wonder how much of a performance boost (if any at all) I actually got by doing this?
If a webpage has multiple click events in different sections, each doing something different, one event listener in the body tag — with multiple if/else statements — would be the most efficient way to handle them all. Is that correct?
No. It's useful for handling events on elements that you add dynamically, but not so much for regular events.
Handling all the events in the body means that you will handle all the events. Every click that happens goes through your code to see if it comes from one of the elements that you are interested in.
Is the search method a good way to handle if/else statements in this case?
No. The search method uses a regular expression, not a string, so the string will be parsed to create a RegExp object. Then a search is done using the regular expression, which is slower than a normal string search, and can give unexpected results.
To look for a string in another string you should use the indexOf method instead. You should however be aware that it will look for a match anywhere in the string, not matching whole class names. It will for example find "all-head" when the element has the class "small-header".
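For example (illustrating both the RegExp cost and the partial-match pitfall):

var className = "small-header";

// search() treats its argument as a regular expression and returns an index:
className.search("all-head");   // 2 (a RegExp object is created and evaluated)

// indexOf() does a plain string comparison, which is cheaper:
className.indexOf("all-head");  // 2, but note it still matches inside "small-header"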
The ID performance only applies to actually finding an element, right?
Yes.
When there are multiple elements that a click event could potentially be on (an icon inside of an anchor link, for example), how much of a performance hit is it to add additional if/else statements (i.e. check the tag type, and if it's not the a tag, move up)?
That means that every click event will have to go through the extra if statements.
If you bind the event to the element instead, you know that the event happened inside that element and you don't have to work out which element actually caught it.
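For example, binding directly to the element that owns the behaviour (the selector is just an illustration):

// Direct binding: only clicks inside this element ever reach the handler,
// so there is no need to inspect event.target's class on every click on the page.
document.querySelector('.js-tab').addEventListener('click', function (event) {
    // do something
});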
A lot of what you're asking is subjective, though, and depends on the context. So there might be a reason why the advice below doesn't apply (as another commenter said, don't prematurely optimize). But there are some programming principles worth mentioning about what you have done, namely modularity. There are a few performance considerations, but they mostly revolve around writing good code:
1) It really depends on your handler. If you intend to delete and add DOM elements, this can be better because you watch the document instead of tying a new click handler to each element. But it can be bad because it's not modular if you have it do too much. And if it's just looking at the state, then it isn't really good performance-wise.
2) I wouldn't say that the search has any impact on the if/elses, but considering the optimization of the particular situation you describe later on, I wouldn't do that: each time you go through an if/else you end up doing another search, and that can add up in the long run if you have a lot of if/elses in the way.
3) There shouldn't be a difference, but why search inside an id string? Unless your id is some unholy long thing like musical-js-tab-one-musical-tab (which I wouldn't suggest using as an id anyway), you should split it up. Don't tie functionality together like that; use classes instead, e.g. class='musical' id='js-tab-one', since ids should describe a singular object on the page.
4) Every if/else you add, no matter how small, costs another cycle and a little more memory, and it can add up. Just attach your click handlers by tag and by class at this point and let the browser decide how to optimize it.
I'm developing a single-page application that uses a lot of widgets (mainly grids and tabs) from the jqWidgets library, all of which are loaded upon page load. It's getting quite large, and I've started to notice that after using the site for a couple of minutes (I emphasize using because it doesn't start to lag after simply being open for any amount of time, but specifically after opening and closing a bunch of tabs on my page, each tab containing multiple grids loaded through Ajax with multiple event listeners tied to each), the UI becomes quite slow and sometimes non-responsive. When the page is refreshed everything works smoothly again for a few minutes, then it's back to being laggy. I'm still testing on localhost. My initial reaction was that the DOM has too many elements (each grid creates hundreds of divs! And I have a lot of them), so event listeners that are tied to IDs have to search through too many elements and become slow. If this is the case it won't be too hard to fix, but is my assumption the likely culprit, or do I have worse things to fear?
UPDATE: here are captures of the memory timeline and heap snapshot. On the memory timeline there was no interaction with the site; the two large increases are page refreshes, and the middle sawtooth section is just letting my site idle.
Without seeing any code examples it doesn't sound too bad.
If you have a LOT of jQuery selectors, try to make those as specific as possible, especially if you're selecting a lot of items a lot of the time.
For example, if you have a bunch of elements with class "abc", try to specify where to look, e.g. are they only found within table cells? Are they only found within paragraph tags? The more specific you make your selector the better, because if you specify the selector like this:
$('.class')
then it will search the entire DOM for anything that matches .class; however, if you specify it as $('p .class') then it will only search within paragraph tags for the class.
Another performance killer is wiring up events and then never removing them. If you have any code that removes elements that have event handlers attached to them, then best practice is to remove the event handlers when the element is removed; otherwise you will start piling up orphaned events.
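For example, with plain DOM APIs (a sketch; the element id and handler are placeholders):

// Keep a reference to the handler so it can be removed later.
function onPanelClick(event) {
    // ... handle the click ...
}

var panel = document.getElementById('old-panel');
panel.addEventListener('click', onPanelClick);

// Later, when the element is being thrown away:
panel.removeEventListener('click', onPanelClick);
panel.parentNode.removeChild(panel);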
If you are doing a large single page application look to a library like backbone (http://backbonejs.org/) or angular (http://angularjs.org/) to see if this can help you - they alleviate a lot of these issues that people who use plain jQuery will run in to.
Finally, this post (http://coding.smashingmagazine.com/2012/11/05/writing-fast-memory-efficient-javascript/) is seriously good at outlining how you can write fast, efficient JavaScript and how to avoid the common performance pitfalls.
Hope this helps.
It does sound like you have a memory leak somewhere. Are you using recursion that's not properly controlled, or do you have loops that could be ended early but that you fail to break out of when you find what you're looking for before the loop naturally ends? Are you using something like this:
document.getElementById(POS.CurrentTableName + '-Menus').getElementsByTagName('td');
where the nodelist returned is huge and you only end up using a tiny bit of it. Those calls are expensive.
It could also be your choice of architecture. Hundreds of divs per grid doesn't sound manageable by a human brain. Do you address each div specifically by id, or are they just an artifact of the lib you're using that clutters up the DOM? Have you inspected the DOM itself as you're using the app to see whether you're adding elements in the hinterland by mistake, cluttering it up with junk you don't use and causing it to grow continuously? Are you adding event handlers to the elements numerous times instead of just once?
For comparison, I too have a single-page app (a Google Chrome app, a multi-currency restaurant point of sale) with anywhere from 1,500 to 20,000 event handlers registered, making calls to a SQLite back end on a Node.js server. I use mostly pure JS, and all but 50 lines of the HTML are written in JS. I tie all the event handlers directly to the lowest-level element responsible for the event. Some elements have multiple handlers (click, change, keydown, blur, etc.).
The app operates at eye-blink speed and stays that fast no matter how long it's up. The DOM is fairly large and I regularly destroy and recreate huge portions of it (a restaurant table is cleared and recreated for the next sitting), including adding up to 1,500 event handlers per table. Hitting the CLEAR button and having the screen refresh with the new table is almost imperceptible, admittedly on a high-end processor. My development environment is Fedora 19 Linux.
Without being able to see your code, it's a little difficult to say exactly.
If the UI takes a little while before it starts getting laggy, then it sounds likely that you have a memory leak somewhere in your JavaScript. This happens quickly when using a lot of closures, as well as nested function and variable references, without cleaning them up when you're done with them.
Also, event binding to many elements can be a huge drain on browser resources. If possible, try to use event delegation to lower the number of elements listening to events. For example:
$('table').on('click', 'td', myEventHandler);
Be careful to make sure that event bindings only occur once, so as to avoid actions being unintentionally fired many times.
Good luck!
Like many developers, I'm producing web-based applications that use AJAX to retrieve data and HTML.
I'm new to web development and JavaScript but have a couple of decades of experience programming in other languages.
I'm using MooTools, which is a great framework, but I have been battling with the lack of destructors in JavaScript, or even onDestroy/unload hooks for DOM elements.
I've written a number of UI classes (mostly to learn) and a lot of them use setInterval timers to periodically get data from the web server and update elements on the page (mostly images from cameras).
Most issues occur when another page is requested with the menu and the content div is reloaded with new HTML and JavaScript (using Request.HTML). This simply replaces all the elements already in the div with the new ones and runs the new scripts. Any timers in the old scripts, or old objects that were created, will continue to run. This was leaving me with lots of orphaned classes, elements and timers.
I've been reading more on the MooTools site, have realized a number of mistakes I've been making, and have started to correct a lot of the issues. The biggest of these was linking my classes directly to the elements instead of using Element.store and Element.retrieve.
I've already found that the contents of the div being reloaded need to be freed by calling destroy on all of its child elements before calling Request.HTML, but that will not remove (clear) any timers that are running.
So I've made a JSFiddle here (deinitialize classes) to show what I've been trying. It appears to work fine, but what I want to know is:
Is it a good idea?
Are there any other issues I might have missed?
Can you see any problem with this type of implementation?
Or am I reinventing the wheel and have missed something?
Explanation
When the class is initialized it stores itself with the element.
It also appends itself to an AssocClasses array (creating it if necessary), which is also stored with the element.
I've created a ClearElement function that is called whenever the contents of an element are about to be replaced by an AJAX call or other method. It gets all elements within the div and, if they have an AssocClasses array attached, calls deinitialize on each of the classes in the array; then it calls destroy on each of the element's direct children to free the elements/storage.
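A sketch of that ClearElement function using MooTools element storage (it assumes each stored class exposes the deinitialize method described above, and that the AssocClasses array was stored with element.store):

function ClearElement(el) {
    // Call deinitialize on every class associated with any descendant element.
    el.getElements('*').each(function (child) {
        var assoc = child.retrieve('AssocClasses');
        if (assoc) {
            assoc.each(function (klass) {
                if (typeof klass.deinitialize === 'function') klass.deinitialize();
            });
        }
    });
    // Then destroy the direct children to free the elements and their storage.
    el.getChildren().each(function (child) {
        child.destroy();
    });
}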
Any information, pointers, etc. would be most gratefully received.
Most issues occur when another page is requested with the menu and the content div is reloaded with new HTML and JavaScript (using Request.HTML). This simply replaces all the elements already in the div with the new ones and runs the new scripts. Any timers in the old scripts, or old objects that were created, will continue to run. This was leaving me with lots of orphaned classes, elements and timers.
I would rethink your timer storage and your use of evalScripts in your AJAX calls.
Keep these outside of your AJAX requests. In peer code reviews I have rarely seen an instance where these were needed; it can usually be done in a better way.
Maybe have the link that is clicked trigger a callback function in onComplete or onSuccess.
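For example, something along these lines (a sketch; the URL and element id are placeholders, and evalScripts is turned off so scripts in the response are not run blindly):

new Request.HTML({
    url: '/some/page',
    update: $('content'),
    evalScripts: false,
    onSuccess: function () {
        // (Re)initialize whatever the newly loaded content needs here.
    }
}).send();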
Without seeing your exact code it will be hard to advise further.