I need to find out if Google Chrome has a limit on JavaScript execution that may slow down some scripts. I'm sorry in advance that I cannot post any HTML or examples, but I will try to explain the problem as thoroughly as possible.
We have a page with a very complex structure (tables within divs within tables, at least 20 levels deep), and the core of the page is split into 2 parts: on one side is a list of categories (1000 divs or so), and on the other are attributes that need to be mapped to them (10 or so). Each of the 1000 categories contains 10 tags (4 spans, 1 ul and 5 divs) and can also load its subcategories, increasing the number even more.
Now, the main problem: the attributes need to be dragged onto the categories in order to execute the mapping, but when you start to drag, it sometimes takes more than 10 seconds for the dragged element to appear, and up to a minute when you drop it (the actual AJAX executes in under half a second).
On Firefox the slowness is not such an issue (the script is still slow, but it executes 10 times faster). Is Chrome limiting the script's execution resources? If so, can you give me any ideas on how to prevent this from happening?
I wouldn't have thought Chrome would be limiting resources. It would be good if you tried the app on different operating systems with the stable, beta and dev versions of Chrome, just to see what the results are like across the board.
It's a shame you cannot post example code. A complex HTML structure combined with complex selectors might be the reason behind the slowness. Is there no way you can show any HTML + JavaScript, perhaps with the private data stripped out?
If not, try to simplify the markup and selectors; I cannot think of much else, as my hands are tied without code.
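Without seeing the code this is only a guess, but one pattern that commonly makes drag-and-drop crawl over ~1000 targets is binding a handler to every category and re-running deep selectors on every drag event. Here is a rough, purely illustrative sketch (plain DOM drag events; the container ID, class name, data attribute and mapAttributeToCategory function are all made-up placeholders) of delegating a single drop handler to the categories' container instead:

// Hypothetical sketch: one delegated handler instead of ~1000 per-category handlers.
// Assumes the category divs sit inside <div id="categories"> and each carries
// class="category" plus a data-category-id attribute.
var container = document.getElementById('categories');

function findCategory(node) {
    // Walk up from the event target to the nearest category div.
    while (node && node !== container) {
        if (node.className && String(node.className).indexOf('category') !== -1) {
            return node;
        }
        node = node.parentNode;
    }
    return null;
}

container.addEventListener('dragover', function (e) {
    if (findCategory(e.target)) {
        e.preventDefault(); // allow dropping on any category
    }
}, false);

container.addEventListener('drop', function (e) {
    var category = findCategory(e.target);
    if (!category) { return; }
    e.preventDefault();
    var categoryId = category.getAttribute('data-category-id');
    var attributeId = e.dataTransfer.getData('text/plain');
    mapAttributeToCategory(categoryId, attributeId); // stand-in for your existing AJAX call
}, false);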
I'm making my portfolio website here,
and I'm wondering if I should replace my LONG HTML5 code that populates my skills/projects/project modals with JavaScript that runs in a for loop.
I know it won't matter much because it's not like there are thousands of list elements, but if I take an asymptotic approach, would it make a difference at all?
I read this thread: Simple html vs Javascript generated html?, but it was still vague to me.
Thank you in advance.
EDIT
Someone voted that this post is unclear about what it's asking. So let me rephrase.
Assume that I'm populating an almost infinite number of <li> elements: will static HTML5 tags (the traditional way) load the page faster, or will a for loop in JavaScript? Another assumption is that the page will be loaded at some point.
Thanks again.
You say an almost infinite number of items. I say you're grossly exaggerating.
Did you ever wonder why Google only shows you the first 10 results? How many times do you look at page 2 of your Google searches? The 3rd page? Ever even seen the 4th?
What's your usual conclusion when you have to go to the 2nd page? Mine is that my query sucks and I try to narrow it down.
There's no way a user is expected to peruse "an almost infinite amount of items" and get any useful information from it. Use a search engine, let people narrow their search (way) down, use paging. And after that's done, use HTML. There's no reason to add another two layers of work (outputting the JavaScript code to render + sending the JS data) just to generate the same output in the end.
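To make the paging point concrete, here is a minimal sketch (the PAGE_SIZE value, renderPage name and element ID are made up for illustration) of rendering only one slice of a long list at a time, so the DOM never holds more than one page's worth of <li> elements:

// Illustrative only: render one page of items instead of the whole list.
var PAGE_SIZE = 25;

function renderPage(items, pageNumber, listElement) {
    var start = pageNumber * PAGE_SIZE;
    var slice = items.slice(start, start + PAGE_SIZE);
    var html = '';
    for (var i = 0; i < slice.length; i++) {
        html += '<li>' + slice[i] + '</li>'; // assumes items are plain, already-escaped strings
    }
    listElement.innerHTML = html; // one DOM write replaces the previous page
}

// Usage: renderPage(allItems, 0, document.getElementById('results'));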
One of the pages served by my application is very long (about 8 MB of source HTML, mostly tables). I know this in itself is a sign of wrong architecture, but there are circumstances that don't allow changing that quickly :(
In almost every browser apart from IE the page is still fine - of course it's slower than an average one but it looks like the time it takes to display is mostly defined by code download speed, and I have no problems with that.
It is very different in IE (7, 8 and 9) though - the page is extremely slow for about 10-15 seconds after downloading, with a frozen-screen effect, then shows noticeable scrolling lag and "script is running slowly" messages even though there is no JavaScript running on the page. IE9 takes about 800 MB of RAM as well when displaying that page.
When I serve plain text content of that size it is much better, but formatted HTML tables seem to be causing problems. It looks like a long DOM is a blocker for IE of any version.
I don't know what answer I'm hoping for - obviously the proper solution would be to change the page architecture by breaking it down on the server side and serving it piece by piece via AJAX, but still - is there any kind of, say, magic pragma or JS that makes IE stop doing whatever it does with the DOM tree and speeds it up?
The best solution would be to chunk the page download on the client (there is a rough sketch of this at the end of this answer). But be advised that the table tag is the slowest-rendering tag in IE, in my experience. So as a first step I think you should make some modifications to the HTML document. Here are some suggestions:
Clear inline styles and use CSS classes as far as you can. It will help keep your HTML document smaller.
Use other markup instead of TABLE; DIVs would be my first recommendation. Simplify your document so the parser can read it as easily as you can. Making it easy to read also means writing less, which again helps keep the document much smaller.
Remove all the spaces, tabs, newline characters and other extra content from the HTML document.
Trim the content you are presenting to what is actually useful to the client. As we all know, a user can only look at a couple of lines at a time, so putting all the data on one page is not a good idea - it's actually useless, because while the user is downloading the document, some of the data may already have been updated on the server, so what the user has is no longer valid.
Finally, remember that every character in the document costs memory (virtual or not) - not just the string itself, but also the parser state and the DOM that gets built from it. The time it takes to read and parse the document matters just as much as its size.
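To illustrate the chunking idea from the first suggestion, here is a rough sketch (the URL, element ID and chunk size are invented for the example; it also assumes the rows have been converted to divs per the second suggestion) of loading the content in pieces so IE never has to parse the full 8 MB at once:

// Illustrative sketch: append content in chunks instead of one huge document.
// Assumes the server can return a fragment of row markup for a given range,
// e.g. GET /report?offset=0&limit=200, and the page has <div id="report-body">.
var CHUNK_SIZE = 200;

function loadChunk(offset) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/report?offset=' + offset + '&limit=' + CHUNK_SIZE, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4 || xhr.status !== 200) { return; }
        if (!xhr.responseText) { return; } // no more rows
        var body = document.getElementById('report-body');
        body.insertAdjacentHTML('beforeend', xhr.responseText);
        // Yield to the browser before requesting the next chunk so IE can render.
        setTimeout(function () { loadChunk(offset + CHUNK_SIZE); }, 0);
    };
    xhr.send(null);
}

loadChunk(0);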
Hope it helps you.
Cheers
I am building a one page webapp and it's starting to get pretty big. There are several components to the app, each one meticulously styled. On average the app has a DOM element count of 1200+. I have been warned by my YSlow scan that this is too many, and that I should have no more than 700 DOM elements.
I am usually quite strict and efficient with my markup and I doubt I would be able to trim much off. I tend to use a lot of DOM elements to get the styling exactly right and working cross browser.
How can I dramatically cut the number of DOM elements?
Will I have to load more of the content on demand (AJAX) instead of all on page load?
Does a large amount of DOM elements have a big impact on performance?
I would love to hear people's experience with this and any solutions you may have...
The number of DOM elements only enters into the picture if you're doing a lot of DOM and/or CSS manipulation on the page via JavaScript. Scanning for an ID in a page with 50,000 elements is always going to be slower than in a page with only 500. Changing a CSS style which is inherited by most of the page will most likely lead to more redrawing/reflowing than it would on a simpler page, etc...
The only way to cut element count is to simplify the page.
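If you want to sanity-check whether DOM size is actually what's hurting your scripts, here is a rough sketch of the kind of measurement you could run from the console on both the full page and a stripped-down copy (the ID and class name are placeholders):

// Rough illustration: time one lookup plus one class change that forces a reflow.
var start = new Date().getTime();
var node = document.getElementById('some-widget');  // placeholder ID
node.className = 'highlighted';                     // placeholder class; triggers a restyle
var forced = document.body.offsetHeight;            // reading layout forces the reflow now
var elapsed = new Date().getTime() - start;
if (window.console) { console.log('lookup + restyle + reflow: ' + elapsed + ' ms'); }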
We've built a single page web app. Initially YSlow worried me, as we had 2,000+ DOM objects in the page.
After some work we got all the other YSlow items to green, and we ended up living with it (around 1,800 right now) as the app is very fast in various browsers.
But we don't support IE6 and IE7, and it could be different for those browsers.
How can I dramatically cut the number of DOM elements?
By using only those elements that are necessary. If you want more elaborate advice, post your code.
Will I have to load more of the content on demand (AJAX) instead of all on page load?
If you want your page to perform better on start-up, you can do that.
Does a large amount of DOM elements have a big impact on performance?
Not necessarily.
You can render elements on demand when the user clicks a button, or use lazy loading the way Twitter does.
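A minimal sketch of both ideas (every name below is a placeholder for your own markup and rendering functions): build a section only when its button is clicked, and append the next batch of items when the user nears the bottom of the page:

// On demand: build a section only the first time its button is clicked.
document.getElementById('show-reports').onclick = function () {
    var panel = document.getElementById('reports-panel');
    if (!panel.innerHTML) {
        panel.innerHTML = buildReportsMarkup(); // your own rendering function
    }
    panel.style.display = 'block';
};

// Lazy loading: append more items when the user scrolls near the bottom.
// (A real implementation would also guard against firing repeatedly.)
window.onscroll = function () {
    var nearBottom = window.innerHeight + window.pageYOffset >= document.body.scrollHeight - 200;
    if (nearBottom) {
        appendNextBatch(); // your own function that adds, say, 20 more items
    }
};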
I'm building a site that makes extensive use of FLIR to allow the use of non-websafe fonts. However, page loads are an ugly process: first the HTML text version of each field loads, and then (a few hundred milliseconds later) it's replaced by its FLIR image counterpart.
Is there any way to avoid this sort of thing? I've got a client presentation in a few hours and I know it'll raise eyebrows. My situation is sort of related to this question which is in regards to sIFR, not FLIR. Any ideas?
Thanks,
Justin
Try putting the following rules into your stylesheet:
.flir-replaced{text-indent:-5000px;}
.flir-image{display:block;}
You may have to modify your other FLIR-related CSS rules to account for the fact that the generated images are now vertically aligned to the top of their respective parents.
It's been a while since I used FLIR, but I recall there was an internal caching method that would pull from cache on load instead of generating it each time.
http://docs.facelift.mawhorter.net/configuration:settings
Also, don't have too many on the page at once; I found that 6-10 were optimal for performance.
Are you on shared hosting? Is your CSS/JS compressed? I found that the initial load was a little slow, but fairly quick after the images had been generated.
A project I'm currently working on has about 10 ULs, each of which will have anywhere from 10-50 elements in it. It's been proposed that each of those elements be given a unique ID that we will use to update content via JavaScript.
This seems like a large number of IDs to add to a page, but each field will have a real and meaningful name.
If this is useful to us, will adding IDs to this many already existing elements have any effect on performance, either while initially rendering the page or while traversing/modifying it with JavaScript?
In my personal experience I've implemented pages with over 1000 unique IDs and even IE seems to cope quite well. However, please remember that IE will create a global variable for each ID on the page, and that in JavaScript both global variables and function names are commonly just properties of the window object.
So in IE the following code will break:
<div id="foo"></div>
<script>
function foo (txt) {
    // Fails in IE because the function name 'foo'
    // clashes with the element ID 'foo'.
    document.getElementById('foo').innerHTML = txt;
}
</script>
Just something to keep in mind, because with such a large number of IDs the chances of function names clashing increase.
I took Eddie Parker's advice. Further, I was interested in the difference between short IDs (<10 characters) and long IDs (>50 characters).
My test used FF2.0 to open a page with n DIV tags, each with an ID, containing only the text "Content":
5000 short IDs: 1.022s to load from localhost and render
5000 long IDs: 1.065s to load from localhost and render
50,000 short IDs: 6.702s to load from localhost and render
50,000 long IDs: 6.792s to load from localhost and render
Hope that gives you a ballpark.
Edit:
I was using the YSlow extension to perform the timing.
I can't answer authoritatively, but why don't you just write a script to generate yourself a ridiculously large UL list, and test rendering performance with/without IDs? Then you can test it across a multitude of browsers while you're at it. Then post the answer up here, answer your own question and earn a shiny badge. ;)
It shouldn't take very long to implement a Python script to output that.
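For example, here is a quick sketch of such a generator (written for Node in JavaScript rather than Python, purely to keep the examples in one language; file names and element counts are arbitrary):

// Illustrative generator: writes two test pages, one with IDs and one without.
var fs = require('fs');

function buildPage(count, withIds) {
    var parts = ['<!DOCTYPE html><html><body><ul>'];
    for (var i = 0; i < count; i++) {
        parts.push(withIds ? '<li id="item-' + i + '">Content</li>' : '<li>Content</li>');
    }
    parts.push('</ul></body></html>');
    return parts.join('\n');
}

fs.writeFileSync('with-ids.html', buildPage(50000, true));
fs.writeFileSync('without-ids.html', buildPage(50000, false));

Open each generated file in the browsers you care about and compare render times.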
Adding all the id attributes of course means that the page source gets longer, which might affect the load time somewhat. Other than that, the effect would be minimal. Just having the elements in the page clearly causes a lot more work than adding an id to them.