How do I stop my webpage cleanly in Firefox? - javascript

I have created an IIS MVC webpage.
Users find that if they leave it open overnight, it is in some "frozen" state in the morning, and it has also frozen any other tabs that might be open in the browser.
Therefore, they have to kill the whole browser window and log into my webpage again.
How can I cleanly shut down (or put into a nice state) my webpage at 10 PM?
I have tried the following, which works on Chrome, but not Firefox:
setTimeout(function () { quitBox('quit') }, millisTill10);

function quitBox(cmd) {
    if (cmd == 'quit') {
        open(location, '_self').close();
        window.close();
    }
    return false;
}
I am happy to leave the tab there - but put it into some kind of clean, dead state, that would not interfere with the other tabs.
I have tried to catch the error to fix it - but I have no idea what is causing it to freeze. The code below does NOT catch it:
window.onerror = function (error, url, line) {
    alert('Inform please ERR:' + error + ' URL:' + url + ' L:' + line);
};
Fuller version:
window.onerror = function (errorMsg, url, lineNumber, column, errorObj) {
    var stackTrace = "Not available";
    try {
        stackTrace = errorObj.prototype.stack;
    } catch (e) {
        try {
            stackTrace = errorObj.stack;
        } catch (e) {
            try {
                stackTrace = errorObj.error.stack;
            } catch (e) {
            }
        }
    }
    alert('Please inform of Error: ' + errorMsg + ' Script: ' + url + ' Line: ' + lineNumber
        + ' Column: ' + column + ' StackTrace: ' + errorObj + ' ST: ' + stackTrace);
}

Debugging a Browser Death
Rather than looking at how to "kill" the tab, it might be worth looking at why the application is dying in the first place. Firefox is a single-process browser (currently), but it has a lot of safety checks in place to keep the process running, which means that there's a pretty short list of things that can actually "kill" it.
First off, let's cross off some things that can't kill it: plugins like Java and Flash. These run in a separate process already (if they're running at all), so at most, if they're runaways, they'll kill themselves, but the rest of the browser will still keep running.
Second, you're not seeing memory warnings. Firefox is pretty good about displaying an error dialog when JavaScript consumes too much memory, so if you're not seeing that, odds are really good it's not an out-of-memory issue.
The Most Likely Causes
What that leaves is a fairly short list of possibilities:
Browser bug (unlikely, but we'll list it anyway)
Browser add-on/extension bug
Infinite loop in JavaScript
Infinite recursion in JavaScript
Not-quite-infinite-but-close-enough loop/recursion in JavaScript
Now let's cross those off.
A browser bug is unlikely, but possible. But, as a general rule when debugging, assume it's your code that's broken, not the third-party framework/tool/library around you. There are a thousand really sharp Mozilla devs working daily to kill any bugs in it, which means that anything you see failing probably has your code as a root cause.
A browser extension/add-on bug is possible, but if you're seeing this on everybody's computers, odds are good that they all have different configurations and it's not an issue with an extension/add-on. Still, it's probably worth testing your site in a fresh Firefox install and letting it sit overnight; if it isn't broken in the morning, then you have a problem with an extension/add-on.
A true infinite loop or infinite recursion is also pretty unlikely, based on your description: That would likely be readily detectable at a much earlier stage after the page has been loaded; I'd expect that at some point, the page would just suddenly go from fully responsive to fully dead — and, more importantly, the browser has an "Unresponsive script" dialog box that it would show you if it was stuck in a tight infinite loop. The fact that you're not seeing an "Unresponsive script" dialog means that the browser is still processing events, but likely processing them so slowly or rarely that it might as well be completely frozen.
Not Quite Infinite Death
More often than not, this is the root cause of "dead page" problems: You have some JavaScript that works well for a little bit of data, and takes forever with a lot of data.
For example, you might have code on the page that tracks the page's behavior and inserts messages about it into an array so that the newest messages are at the top, like this: logs.unshift(message). That code works fine while there are relatively few messages, and grinds to a halt when you get a few hundred thousand, because every unshift() has to move all of the existing entries.
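A minimal sketch of that pitfall and one way out (the cap of 1,000 entries and the function names here are purely illustrative):

var logs = [];
var MAX_LOG_ENTRIES = 1000; // illustrative cap

function addLogEntry(message) {
    // push() appends in O(1); unshift() moves every existing entry
    // on each call, so n insertions cost O(n^2) overall.
    logs.push(message);

    // Bound the buffer so an overnight page can't accumulate
    // hundreds of thousands of entries.
    if (logs.length > MAX_LOG_ENTRIES) {
        logs.shift(); // drop the oldest entry (cheap at this size)
    }
}

// Reverse a copy only when displaying, so the newest appear first.
function newestFirst() {
    return logs.slice().reverse();
}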
Given that your page is dying in the middle of the night, I'd wager dollars to donuts that you have something related to regular tracking or logging, or maybe a regular Ajax call, and when it kicks off, it performs some action that has overall O(n^2) or O(n^3) behavior — it only gets slow enough to be noticeable when there's a lot of data in it.
You can also get similar behavior by accidentally forcing reflows in the DOM. For example, we had a chunk of JavaScript some years ago that created a simple bulleted list in the UI. After it inserted each item, it would measure the height of the item to figure out where to put the next one. It worked fine for ten items — and died with a hundred, because "measure the height" really means to the browser, "Since there was a new item, we have to reflow the document to figure out all the element sizes before we can return the height." When inserting, say, the 3rd item, the browser only has to recompute the layout of the two before it. But when inserting the 1000th item, the browser has to recompute the layout of all 999 before it — not a cheap operation!
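One way to avoid the per-item reflow in an example like that (a sketch, not the exact fix we used; the element names are assumptions): build the items off-DOM, insert them in one pass, and batch any measurements after all the writes.

var list = document.getElementById('list'); // assumed <ul id="list">
var fragment = document.createDocumentFragment();

for (var i = 0; i < 1000; i++) {
    var item = document.createElement('li');
    item.textContent = 'Item ' + i;
    fragment.appendChild(item); // off-DOM: no layout work yet
}

list.appendChild(fragment); // one insertion, one layout pass

// If you must measure, do all reads after all writes, so the
// browser reflows once instead of once per item.
var heights = Array.prototype.map.call(list.children, function (el) {
    return el.offsetHeight;
});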
I would recommend you search through your code for things like this, because something like it is probably your root cause.
Finding the Bug
So how do you find it, especially in a large JavaScript codebase? There are three basic techniques:
Try another browser.
Divide and conquer.
Historical comparison.
Try Another Browser
Sometimes, using another browser will produce different behavior. Chrome or IE/Edge might abort the JavaScript instead of dying outright, and you might get an error message or a JavaScript console message instead of a dead browser. If you haven't at least tried letting Chrome or IE/Edge sit on the page overnight, you're potentially ignoring valuable debugging messages. (Even if your production users will never use Chrome or IE/Edge, it's at least worth testing the page in them to see if you get different output that could help you find the bug.)
Divide-and-Conquer
Let's say you still don't know what the cause is, even bringing other browsers into the picture. If that's the case, then I'd tackle it with the approach of "divide and conquer":
Remove half of the JavaScript from the page. (Find a half you can remove, and then get rid of it.)
Load the page, and wait for it to die.
Analyze the result:
If the page dies, you know the problem is still in the remaining half of the code, and not in the half you removed.
If the page doesn't die, you know the problem is in the half you removed, so put that code back, and then remove the good half of the code so you're left with only the buggy code in the page.
Repeat steps 1-3, cutting the remaining JavaScript in half each time, until you've isolated the bug.
Since it takes a long time for your page to die, this may make for a long debugging exercise. But the divide-and-conquer technique will find the bug, and it will find it faster than you think: Even if you have a million lines of JavaScript (and I'll bet you have far less), and you have to wait overnight after cutting it in half each time, it will still take you only twenty days to find the exact line of code with the bug. (The base-2 logarithm of 1,000,000 is approximately 20.)
Historical Analysis
One more useful technique: If you have version control (CVS, SVN, Git, Mercurial, etc.) for your source code, you may want to consider testing an older, historical copy of the code to see if it has the same bug. (Did it fail a year ago? Six months ago? Two years ago?) If you can eventually rewind time to before the bug was added, you can see what the actual change was that caused it, and you may not have to hunt through the code arbitrarily in search of it.
Conclusion
In short, while you can possibly put a band-aid on the page to make it fail gracefully — and that might be a reasonable short-term fix while you're searching for the actual cause — there's still very likely a lot more you can do to find the bug and fix it for real.
I've never seen a bug I couldn't eventually find and fix, and you shouldn't give up either.
Addendum:
I suppose for the sake of completeness in my answer, the simple code below could be a suitable "kill the page" band-aid for the short term. This just blanks out the page if a user leaves it sitting for eight hours:
<script type='text/javascript'><!--
setTimeout(function() {
    document.location = 'about:blank';
}, 1000 * 60 * 60 * 8); // Maximum of 8 hours, counted in milliseconds
--></script>
Compatibility: This works on pretty much every browser with JavaScript, and should work all the way back to early versions of IE and Netscape from the '90s.
But if you use this code at all, don't leave it in there very long. It's not a good idea. You should find — and fix! — the bug instead.

If the tab was opened by JavaScript, then you may close it with JavaScript.
If the tab was NOT opened by JavaScript, then you may NOT close it with JavaScript.
You CAN configure Firefox to allow tabs to be closed by JavaScript by navigating to about:config in the URL bar and setting dom.allow_scripts_to_close_windows to true. However, this will have to be configured on a machine-by-machine basis (and opens up the possibility of other websites closing the tab via JavaScript).
So:
Open the tab via JavaScript so that it can then be closed by JavaScript
or
Change a Firefox setting so that JavaScript can close a tab.
PS. I'd also recommend taking a look at https://developers.google.com/web/tools/chrome-devtools/memory-problems/ to help identify memory leaks (if the application has one). Alternatively, have the web app ping the server every minute for logging purposes (if the application is breaking at an unknown time at the same time every night).
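A minimal sketch of that heartbeat idea (the '/heartbeat' endpoint is an assumption; substitute your own logging URL). The last ping your server receives tells you roughly when the page died:

setInterval(function () {
    // sendBeacon is fire-and-forget; fall back to a plain
    // XMLHttpRequest on older browsers if needed.
    navigator.sendBeacon('/heartbeat', JSON.stringify({
        t: new Date().toISOString(),
        page: location.pathname
    }));
}, 60 * 1000); // once a minute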

As mentioned in other comments, you don't need to find out how to close the page; you need to find out how to avoid the "freezing".
I suggest:
Collect statistics on which browser(s) fail.
Collect metrics from the browsers that fail: you can use window.performance and send the logs to the server on a timer (I haven't tried this; the timer may freeze before you have the logs you need).
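Something along these lines could work, with the caveat above (the '/api/metrics' endpoint is an assumption, and performance.memory is non-standard Chrome-only, so it is guarded):

setInterval(function () {
    var payload = {
        when: Date.now(),
        uptimeMs: performance.now()
    };
    if (performance.memory) { // non-standard; Chrome only
        payload.usedJSHeapSize = performance.memory.usedJSHeapSize;
    }
    fetch('/api/metrics', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
        keepalive: true // let the request finish even if the page dies
    }).catch(function () { /* diagnostics only; ignore failures */ });
}, 60 * 1000);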

Related

Violation Long running JavaScript task took xx ms

Recently, I got this kind of warning, and this is my first time getting it:
[Violation] Long running JavaScript task took 234ms
[Violation] Forced reflow while executing JavaScript took 45ms
I'm working on a group project and I have no idea where this is coming from. This never happened before. Suddenly, it appeared when someone else got involved in the project. How do I find what file/function causes this warning? I've been looking for the answer, but mostly about the solution on how to solve it. I can't solve it if I can't even find the source of the problem.
In this case, the warning appears only on Chrome. I tried to use Edge, but I didn't get any similar warnings, and I haven't tested it on Firefox yet.
I even get the error from jquery.min.js:
[Violation] Handler took 231ms of runtime (50ms allowed) jquery.min.js:2
Update: Chrome 58+ hid these and other debug messages by default. To display them click the arrow next to 'Info' and select 'Verbose'.
Chrome 57 turned on 'hide violations' by default. To turn them back on you need to enable filters and uncheck the 'hide violations' box.
suddenly it appears when someone else involved in the project
I think it's more likely you updated to Chrome 56. This warning is a wonderful new feature, in my opinion, please only turn it off if you're desperate and your assessor will take marks away from you. The underlying problems are there in the other browsers but the browsers just aren't telling you there's a problem. The Chromium ticket is here but there isn't really any interesting discussion on it.
These messages are warnings instead of errors because it's not really going to cause major problems. It may cause frames to get dropped or otherwise cause a less smooth experience.
They're worth investigating and fixing to improve the quality of your application however. The way to do this is by paying attention to what circumstances the messages appear, and doing performance testing to narrow down where the issue is occurring. The simplest way to start performance testing is to insert some code like this:
function someMethodIThinkMightBeSlow() {
const startTime = performance.now();
// Do the normal stuff for this function
const duration = performance.now() - startTime;
console.log(`someMethodIThinkMightBeSlow took ${duration}ms`);
}
If you want to get more advanced, you could also use Chrome's profiler, or make use of a benchmarking library like this one.
Once you've found some code that's taking a long time (50ms is Chrome's threshold), you have a couple of options:
Cut out some/all of that task that may be unnecessary
Figure out how to do the same task faster
Divide the code into multiple asynchronous steps
(1) and (2) may be difficult or impossible, but it's sometimes really easy and should be your first attempts. If needed, it should always be possible to do (3). To do this you will use something like:
setTimeout(functionToRunVerySoonButNotNow);
or
// This one is not available natively in IE, but there are polyfills available.
Promise.resolve().then(functionToRunVerySoonButNotNow);
You can read more about the asynchronous nature of JavaScript here.
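As a rough sketch of option (3), a long-running loop can be split into chunks that yield to the event loop between batches (the names here are illustrative placeholders, not a specific API):

function processInChunks(items, processItem, chunkSize) {
    var index = 0;
    function doChunk() {
        var end = Math.min(index + chunkSize, items.length);
        for (; index < end; index++) {
            processItem(items[index]);
        }
        if (index < items.length) {
            setTimeout(doChunk); // yield, then continue with the rest
        }
    }
    doChunk();
}

// e.g. handle 100 items per turn of the event loop:
// processInChunks(bigArray, handleOne, 100);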
These are just warnings, as everyone mentioned. However, if you're keen on resolving them (which you should be), you need to identify what is causing the warning first. There's no single reason for a forced reflow warning.
Someone has created a list for some possible options. You can follow the discussion for more information.
Here's the gist of the possible reasons:
What forces layout / reflow
All of the below properties or methods, when requested/called in JavaScript, will trigger the browser to synchronously calculate the style and layout. This is also called reflow or layout thrashing, and is a common performance bottleneck.
Element
Box metrics
elem.offsetLeft, elem.offsetTop, elem.offsetWidth, elem.offsetHeight, elem.offsetParent
elem.clientLeft, elem.clientTop, elem.clientWidth, elem.clientHeight
elem.getClientRects(), elem.getBoundingClientRect()
Scroll stuff
elem.scrollBy(), elem.scrollTo()
elem.scrollIntoView(), elem.scrollIntoViewIfNeeded()
elem.scrollWidth, elem.scrollHeight
elem.scrollLeft, elem.scrollTop (also setting them)
Focus
elem.focus() can trigger a double forced layout (source)
Also…
elem.computedRole, elem.computedName
elem.innerText (source)
getComputedStyle
window.getComputedStyle() will typically force style recalc (source)
window.getComputedStyle() will force layout, as well, if any of the following is true:
The element is in a shadow tree
There are media queries (viewport-related ones). Specifically, one of the following (source):
min-width, min-height, max-width, max-height, width, height
aspect-ratio, min-aspect-ratio, max-aspect-ratio
device-pixel-ratio, resolution, orientation
The property requested is one of the following (source):
height, width
top, right, bottom, left
margin [-top, -right, -bottom, -left, or shorthand], only if the margin is fixed
padding [-top, -right, -bottom, -left, or shorthand], only if the padding is fixed
transform, transform-origin, perspective-origin
translate, rotate, scale
webkit-filter, backdrop-filter
motion-path, motion-offset, motion-rotation
x, y, rx, ry
window
window.scrollX, window.scrollY
window.innerHeight, window.innerWidth
window.getMatchedCSSRules() only forces style
Forms
inputElem.focus()
inputElem.select(), textareaElem.select() (source)
Mouse events
mouseEvt.layerX, mouseEvt.layerY, mouseEvt.offsetX, mouseEvt.offsetY
(source)
document
doc.scrollingElement only forces style
Range
range.getClientRects(), range.getBoundingClientRect()
SVG
Quite a lot; an exhaustive list hasn't been made, but Tony Gentilcore's 2011 Layout Triggering List pointed to a few.
contenteditable
Lots & lots of stuff, …including copying an image to clipboard (source)
Check more here.
Also, here's Chromium source code from the original issue and a discussion about a performance API for the warnings.
Edit: There's also an article on how to minimize layout reflow on PageSpeed Insight by Google. It explains what browser reflow is:
Reflow is the name of the web browser process for re-calculating the positions and geometries of elements in the document, for the purpose of re-rendering part or all of the document. Because reflow is a user-blocking operation in the browser, it is useful for developers to understand how to improve reflow time and also to understand the effects of various document properties (DOM depth, CSS rule efficiency, different types of style changes) on reflow time. Sometimes reflowing a single element in the document may require reflowing its parent elements and also any elements which follow it.
In addition, it explains how to minimize it:
Reduce unnecessary DOM depth. Changes at one level in the DOM tree can cause changes at every level of the tree - all the way up to the root, and all the way down into the children of the modified node. This leads to more time being spent performing reflow.
Minimize CSS rules, and remove unused CSS rules.
If you make complex rendering changes such as animations, do so out of the flow. Use position-absolute or position-fixed to accomplish this.
Avoid unnecessary complex CSS selectors - descendant selectors in particular - which require more CPU power to do selector matching.
A couple of ideas:
Remove half of your code (maybe via commenting it out).
Is the problem still there? Great, you've narrowed down the possibilities! Repeat.
Is the problem not there? Ok, look at the half you commented out!
Are you using any version control system (eg, Git)? If so, git checkout some of your more recent commits. When was the problem introduced? Look at the commit to see exactly what code changed when the problem first arrived.
I found the root of this message in my code, which searched and hid or showed nodes (offline). This was my code:
search.addEventListener('keyup', function () {
    for (const node of nodes)
        if (node.innerText.toLowerCase().includes(this.value.toLowerCase()))
            node.classList.remove('hidden');
        else
            node.classList.add('hidden');
});
The performance tab (profiler) showed the event taking about 60 ms.
Now:
search.addEventListener('keyup', function () {
    const nodesToHide = [];
    const nodesToShow = [];
    for (const node of nodes)
        if (node.innerText.toLowerCase().includes(this.value.toLowerCase()))
            nodesToShow.push(node);
        else
            nodesToHide.push(node);
    nodesToHide.forEach(node => node.classList.add('hidden'));
    nodesToShow.forEach(node => node.classList.remove('hidden'));
});
The performance tab (profiler) now shows the event taking about 1 ms.
And I feel that the search works faster now (229 nodes).
In order to identify the source of the problem, run your application, and record it in Chrome's Performance tab.
There you can check various functions that took a long time to run. In my case, the one that correlated with warnings in console was from a file which was loaded by the AdBlock extension, but this could be something else in your case.
Check these files and try to identify if this is some extension's code or yours. (If it is yours, then you have found the source of your problem.)
Look in the Chrome console under the Network tab and find the scripts which take the longest to load.
In my case there was a set of Angular add-on scripts that I had included but not yet used in the app:
<script src="//cdnjs.cloudflare.com/ajax/libs/angular-ui-router/0.2.8/angular-ui-router.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/angular-ui-utils/0.1.1/angular-ui-utils.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.9/angular-animate.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.9/angular-aria.min.js"></script>
These were the only JavaScript files that took longer to load than the time that the "Long Running Task" error specified.
All of these files run on my other websites with no errors generated, but I was getting this "Long Running Task" error on a new web app that barely had any functionality. The error stopped immediately upon removing them.
My best guess is that these Angular add ons were looking recursively into increasingly deep sections of the DOM for their start tags - finding none, they had to traverse the entire DOM before exiting, which took longer than Chrome expects - thus the warning.
I found a solution in Apache Cordova source code.
They implement like this:
var resolvedPromise = typeof Promise == 'undefined' ? null : Promise.resolve();
var nextTick = resolvedPromise ? function(fn) { resolvedPromise.then(fn); } : function(fn) { setTimeout(fn); };
A simple implementation, but a smart approach:
On Android 4.4 and above, use Promise.
For older browsers, use setTimeout().
Usage:
nextTick(function() {
// your code
});
After inserting this trick code, all warning messages are gone.
Adding my insights here as this thread was the "go to" stackoverflow question on the topic.
My problem was in a Material-UI app (early stages): the placement of a custom theme provider was the cause.
I did some calculations that forced a re-render of the page (one component, "display results", depends on what is set in others, the "input sections"). Everything was fine until I updated the "state" that forces the "results component" to rerender. The main issue here was that I had a Material-UI theme (https://material-ui.com/customization/theming/#a-note-on-performance) in the same renderer (App.js / return...) as the "results component", SummaryAppBarPure.
The solution was to lift the ThemeProvider one level up (index.js) and wrap the App component there, thus not forcing the ThemeProvider to recalculate and draw / layout / reflow.
before, in App.js:
return (
    <>
        <MyThemeProvider>
            <Container className={classes.appMaxWidth}>
                <SummaryAppBarPure
                //...
in index.js:
ReactDOM.render(
    <React.StrictMode>
        <App />
        //...
after, in App.js:
return (
    <>
        {/* move theme to index. made reflow problem go away */}
        {/* <MyThemeProvider> */}
        <Container className={classes.appMaxWidth}>
            <SummaryAppBarPure
            //...
in index.js:
ReactDOM.render(
    <React.StrictMode>
        <MyThemeProvider>
            <App />
            //...
This was added in the Chrome 56 beta, even though it isn't on this changelog from the Chromium Blog: Chrome 56 Beta: “Not Secure” warning, Web Bluetooth, and CSS position: sticky
You can hide this in the filter bar of the console with the Hide violations checkbox.
This is a violation error from Google Chrome that shows when the Verbose logging level is enabled.
Explanation:
Reflow is the name of the web browser process for re-calculating the positions and geometries of elements in the document, for the purpose of re-rendering part or all of the document. Because reflow is a user-blocking operation in the browser, it is useful for developers to understand how to improve reflow time and also to understand the effects of various document properties (DOM depth, CSS rule efficiency, different types of style changes) on reflow time. Sometimes reflowing a single element in the document may require reflowing its parent elements and also any elements which follow it.
Original article: Minimizing browser reflow by Lindsey Simon, UX Developer, posted on developers.google.com.
And this is the link Google Chrome gives you in the Performance profiler, on the layout profiles (the mauve regions), for more info on the warning.
If you're using Chrome Canary (or Beta), just check the 'Hide Violations' option.
For what it’s worth, here are my 2¢ when I encountered the
[Violation] Forced reflow while executing JavaScript took <N>ms
warning. The page in question is generated from user content, so I don’t really have much influence over the size of the DOM. In my case, the problem is a table of two columns with potentially hundreds, even thousands of rows. (No on-demand row loading implemented yet, sorry!)
Using jQuery, on keydown the page selects a set of rows and toggles their visibility. I noticed that using toggle() on that set triggers the warning more readily than using hide() & show() explicitly.
For more details on this particular performance scenario, see also this article.
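To illustrate the pattern (this is a sketch, not my exact code; the selectors are assumptions): decide each row's visibility from your own data first, then call hide()/show() explicitly, rather than letting toggle() query each element's current state.

$('#search').on('keydown', function () {
    var query = this.value.toLowerCase();
    $('#bigTable tr').each(function () {
        // Decide visibility from the data, not from current layout state.
        var matches = $(this).text().toLowerCase().indexOf(query) !== -1;
        if (matches) {
            $(this).show();
        } else {
            $(this).hide();
        }
    });
});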
The answer is that it's a feature in newer Chrome browsers where it alerts you if the web page causes excessive browser reflows while executing JS. Please refer to
Forced reflow often happens when you have a function called multiple times before the end of execution.
For example, you may have the problem on a smartphone, but not on a classic browser.
I suggest using a setTimeout to solve the problem.
This isn't very important, but I repeat: the problem arises when you call a function several times, not when the function takes more than 50 ms. I think you are mistaken in your answers.
Turn off the 1-by-1 calls and reload the code to see if it still produces the error.
If a second script causes the error, use a setTimeout based on the duration of the violation.
This is not an error, just a message. To get rid of the message, change
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> (example)
to
<!DOCTYPE html> (the doctype the Firefox source expects)
The message was shown in Google Chrome 74 and Opera 60. After changing it, the console was clear: zero verbose messages.

How to trace slow JS or JQuery code

I created a web page for viewing images. This page has some other code that gets included that I did not write. The page loads 40 small images upon load. Then the user will scroll down, and additional pages of 40 images can be loaded via Ajax. Once I get to 15-20 pages, I notice the page begins to slow significantly. I check app counters and it can go up to 100% CPU, and memory can go over 3 GB. Then I will inevitably get the modal saying that jQuery is taking too long to execute, asking me if I want to stop executing the script. Now I realize that a page with up to 800 images is a big load, but the issue with jQuery suggests to me that some code may also be iterating over this larger and larger group of DOM objects. It almost appears to get exponentially slower as I pass 15 pages or so. Once I get to 20 pages it becomes almost unusable.
First of all, is it even possible to run a page efficiently, even with minimal JS, when you have this many images? Secondly, is there a recommended way to "trace" JS and see what kinds of functions are getting executed to help determine what is the most likely culprit? This is most important to me - is there a good way to do in Firebug?
Thanks :)
EDIT - I found my answer. I had some older code which was being used to replace images that failed to load with a generic image. This code was using jQuery's .each operator and thus was iterating over the entire page, plus each new Ajax addition, every time the page loaded. I am going to set a class in CSS for the images that need to be checked, so that the Ajax-loaded images are unaffected.
Firebug, and all the other debugging tools let you profile your functions. You can see how long they take to run and how many times they have been called.
http://getfirebug.com/javascript
See: Profile JavaScript performance
Another useful tool to look into is the console.profile() function:
console.log('Starting Profile');
console.profile();
SuspectFunction();
console.profileEnd();
Through the console window in the debugger, you can see the profile results.
The best tool I have used is https://developers.google.com/web-toolkit/speedtracer/ for Chrome
To answer your first question, 15 pages of images should not be a problem for a computer to handle. Google loads up to 46 pages of images without lagging at all. Although it does stop you from loading more after that.
To answer your second question, there are many ways to trace JS code. Since you are doing performance-related debugging, I'd go with a timestamped console log:
console.log(" message " + new Date());
I'd put one in the beginning and end of function you are interested in measuring the performance of, and read through the log to see how long it takes to execute each of those functions. You would compare the timestamp to see what excess code is executing and how long it took for the code to execute.
Finally, in Firebug, go to the Console tab and click on Profile before you start scrolling down the page. Then scroll to your 15th page and click Profile again. It breaks down the functions called and the amount of time each took.
I prefer to use the timer function in Firebug or Chrome, it can be called like this:
function someFunction() { ... }

console.time('someFunction timer');
someFunction(); // time the call, not the declaration
console.timeEnd('someFunction timer');
This isn't as robust as the profiler functions, but it should give you an idea of how long functions are taking.
Also, if you are running at 100% CPU and 3 GB of memory, you almost certainly have a memory leak. You may want to consider removing some of the initial images as more pages are loaded in. For example, after 5 pages are shown, you remove the first page when the user views the 6th page.
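A minimal sketch of that windowing idea, assuming each Ajax load produces one container element per page (the names and the cap of 5 are illustrative):

var MAX_PAGES_IN_DOM = 5; // illustrative cap
var loadedPages = [];

function onPageLoaded(pageElement) {
    loadedPages.push(pageElement);
    if (loadedPages.length > MAX_PAGES_IN_DOM) {
        var oldest = loadedPages.shift();
        // Removing the elements lets the browser release the images.
        oldest.parentNode.removeChild(oldest);
    }
}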
I was able to fix the problem by going over my code again. I was loading new images via Ajax, but I had an older line of code that was checking all images, i.e. $('img'), to replace any images that failed to load with a generic image. This means that as I continually load new images, this selector has to iterate over the entire growing DOM again and again. I altered that code and now the page is flying! Thanks everyone for the help.

Using WebInspector's Javascript Debugger, how do I break out of an infinite loop?

I wrote an infinite loop in my javascript code. Using WebKit's Web Inspector, how do I terminate execution? I have to quit Safari and reopen it again (after changing my code of course).
EDIT: To be more specific, I'm looking for a way to enter the looping executing process/thread to see why the loop isn't terminating. An analogy would be in GDB where I could do a ^C and break into the process. I'm not looking for a way to kill my web browser. I'm pretty good at that already.
I'm not familiar with WebKit, but this sounds like a common problem that I usually debug as follows: Declare an integer outside of the scope of the loop, and increment it for each iteration, then throw an exception when the iteration count exceeds the maximum expected possible amount of iterations. So, in pseudo-code, something like the following could be used to debug this problem:
var iterations = 0;
var greatestPossibleNumberOfValidIterations = 500;
while (shouldLoopBooleanTest) {
    iterations++;
    if (iterations > greatestPossibleNumberOfValidIterations) {
        // do debugging/error handling here, then bail out
        // so the loop actually stops:
        throw new Error('Loop exceeded ' + greatestPossibleNumberOfValidIterations + ' iterations');
    }
}
I don't expect that this is specific enough to warrant an accepted answer, but I hope it helps you solve your problem.
Have you tried top to look at the process tree? Find the PID of the program and then type kill -9 [PID].
Have you tried using F11? This is the key for step into: https://trac.webkit.org/wiki/WebInspector
In using Firefox (I know the question doesn't specify Firefox, but I'm just throwing this in, in case it helps somebody), Safari, and Chrome, I found that there isn't a consistent way to break into an infinite loop, but there are ways to setup execution to break into a loop if you need to. See my test code below.
Firefox
Utter trash. It just stood there spinning its rainbow. I had to kill it.
Safari
The nice thing about Safari is that it will eventually throw up a dialog asking if you want to stop. You should say yes. Then hit command-option i to bring up the Web Inspector. The source should pop-up saying that the Javascript exceeded the timeout. Hit the pause button in the right hand side towards the top, then refresh. The code will reload, now step through: I like using command-; because it steps through every call. If you have complex code you might never get to the end. Don't hit continue though (command-/) or you'll be back at square one.
Chrome
The most useful of the three. If you naively load the code, it will keep going forever. But, before loading the page, open the Web Inspector and select 'Load resources all the time'. Then reload the page. While the page is trying to load, you can click over to scripts and pause the Javascript while it is running. This is what I was looking for.
Code
<html>
<head></head>
<body onload="TEST.forever.loop()">
Nothing
</body>
<script type="text/javascript">
    TEST = {};
    TEST.forever = {
        loop : function() {
            var i = 0;
            while (i >= 0) {
                i++;
            }
        }
    };
</script>
</html>

Javascript: Unresponsive script error

I get an error message from Firefox "Unresponsive script". This error is due to some javascript I added to my page.
I was wondering if the unresponsiveness is caused exclusively by code loops (functions calling each other cyclically, or endless "for" loops), or whether there might be other causes.
Could you help me to debug these kind of errors ?
thanks
One way to avoid this is to wrap your poor performant piece of code with a timeout like this:
setTimeout(function() {
// <YOUR TIME CONSUMING OPERATION GOES HERE>
}, 0);
This is not a bullet proof solution, but it can solve the issue in some cases.
According to the Mozilla Knowledge Base:
When JavaScript code runs for longer than a predefined amount of time, Firefox will display a dialog that says Warning: Unresponsive Script. This time is given by the settings dom.max_script_run_time and dom.max_chrome_script_run_time. Increasing the values of those settings will cause the warning to appear less often, but will defeat the purpose: to inform you of a problem with an extension or web site so you can stop the runaway script.
Furthermore:
Finding the source of the problem
To determine what script is running too long, click the Stop Script button on the warning dialog, then go to Tools | Error Console. The most recent errors in the Error Console should identify the script causing the problem.
Checking the error console should make it pretty obvious what part of your javascript is causing the issue. From there, either remove the offending piece of code or change it in such a way that it won't be as resource intensive.
EDIT: As mentioned in the comments to the author of the topic, Firebug is highly recommended for debugging problems with javascript. Jonathan Snook has a useful video on using breakpoints to debug complex pieces of javascript.
We need to follow these steps to stop script in Firefox.
Step 1
Launch Mozilla Firefox.
Step 2
Click anywhere in the address bar at the top of the Firefox window, to highlight the entire field.
Step 3
Type "about:config" (omit the quotes here and throughout) and press "Enter." A "This might void your warranty!" warning message may come up; if it does, click the "I'll be careful, I promise!" button to open the page that contains all Firefox settings.
Step 4
Click once in the "Search" box at the top of the page and type "dom.max_script_run_time". The setting is located and displayed; it has a default value of 10.
Step 5
Double-click the setting to open the Enter Integer Value window.
Step 6
Type "0" and click "OK" to set the value to zero. Firefox scripts now run indefinitely, and will not throw any script errors.
Step 7
Restart Mozilla Firefox.
Excellent solution in this question: How can I give control back (briefly) to the browser during intensive JavaScript processing?, by using the Async jQuery Plugin. I had a similar problem and solved it by changing my $.each to $.eachAsync.
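If you'd rather not pull in the plugin, here is a minimal hand-rolled sketch of the same idea ($set, fn, and batchSize are illustrative names, mimicking $.each's callback signature):

function eachAsync($set, fn, batchSize) {
    var i = 0;
    (function next() {
        var end = Math.min(i + batchSize, $set.length);
        for (; i < end; i++) {
            fn.call($set[i], i, $set[i]); // like $.each: (index, element)
        }
        if (i < $set.length) {
            setTimeout(next, 0); // hand control back to the browser
        }
    })();
}

// Usage: eachAsync($('.row'), function (i, el) { /* per-element work */ }, 50);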
there could be an infinite loop somewhere in the code
start by commenting out code to identify which section is causing it
too many loops: there might be a chance that your counter variable names clash, causing the variable to keep resetting and the loop to run forever (see the sketch after this list)
try as much as possible to create hashes for your objects, so that read time is O(1), in effect caching that data
avoid using JS libs where the methods might cause overhead, e.g. jQuery's .html() vs. plain .innerHTML
setTimeout() yes and no -- depends on how you chunkify your code
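A sketch of the counter clash mentioned above: both loops accidentally share one global i because the counters were never declared, so the outer loop can never finish.

function outer() {
    for (i = 0; i < 10; i++) {   // `i` leaks to the global scope
        inner();
    }
}

function inner() {
    for (i = 0; i < 5; i++) {    // ...and is reset to 0 here
        // work
    }
}

// After inner() returns, i is 5; outer() bumps it to 6, calls
// inner() again, which resets it -- an infinite loop. Declaring
// the counters with `var` (or `let`) in each function fixes it.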

HTML validation and loading times

Does having (approx 100) HTML validation errors affect my page loading speeds? Currently the errors on my pages don't break the page in ANY browser, but I'd spend time and clear those anyhow if it would improve my page loading speed?
If not on desktops, how about mobile devices like iPhone or Android? (For example, N1 and Droid load pages much slower than iPhone although they both use Webkit engine.)
Edit: My focus here is speed optimization not cross-browser compatibility (which is already achieved). Google and other biggies seem to use invalid HTML for speed or compatibility of both?
Edit #2: I'm not in quirks mode, i.e. I use XHTML Strict Doctype and my source looks great and its mostly valid, but 100% valid HTML usually requires design (or some other kind of) sacrifice.
Thanks
It doesn't affect -loading- speed. Bad data is transferred over the wires just as fast as good data.
It does affect rendering speed though (...in some cases... ...positively! Yeah, MSIE tends to be abysmally slow in standards mode) In most cases though, render speed will be somewhat slower due to Quirks mode which is less efficient, more paranoid and generally instead of just executing your data like a well-written program, it tries its best to fish out some meaningful content from what is essentially a tag soup.
Some validation errors like missing ALT or no / at the end of single-element tags won't affect render at all, but some, like missing a closing tag or using antiquated obsolete parameters may impact performance seriously.
It might affect loading speed, or it might not. It depends on the kind of errors you're getting.
I'd say that in most cases it's likely that it will be slower, because the browser will have to handle these errors. For instance, if you forgot to close a div tag, some browsers will close it for you. This takes processing time and increases the loading time.
That said, I think the time delta between no errors and 100 errors would be minimal. But if you have that many errors, you should consider fixing your code :)
Probably yes, and here's why.
If your code is valid for the W3C doctype you are using, then the browser doesn't have to put in more effort to try and fix your code. (The forgiving rendering mode browsers fall back to for malformed pages is called quirks mode.) It's logical that if your code were to validate, the browser wouldn't have to try to piece the website back together.
Remember, it's always beneficial to make your code validate, if only to ensure a consistent design across the popular browsers. Finally, you'll probably find that fixing the first few errors makes your list of 100 errors drastically decrease.
In theory, yes it will decrease page load times because the browser has to do less to handle errors and so on.
However it does depend on the nature of the validation errors. If you're improperly nesting tags (which actually may be valid in HTML4) then the browser would have to do a little more work working out where elements start and end. And this is the kind of thing that can cause cross-browser problems.
If you're simply using unofficial attributes (say, the target attribute on links) then support for that is either built into the browser or not. If the browser understands it, it will do something with it, otherwise it will ignore the attribute.
One thing that will ramp up your validation errors is using <br> under XHTML or <br /> under HTML. Neither should increase loading times (although <br /> takes a fraction longer to download).
