Angular + RequireJS: quickly refreshing the page causes the page to stop working in Chrome - javascript

During development with Angular + RequireJS I ran into a weird problem.
If I refresh the page quickly (e.g. by pressing F5), then after a few attempts I get the following error message and the page stops working, even if I refresh again, unless I clear the cache and open the page in a new tab:
Error: $digest already in progress
at Error (<anonymous>)
at beginPhase (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:8495:15)
at Object.Scope.$apply (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:8297:11)
at done (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9357:20)
at completeRequest (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9520:7)
at XMLHttpRequest.xhr.onreadystatechange (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9490:11)
at http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9499:11
at sendReq (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9333:9)
at $http (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9124:17)
at Function.$http.(anonymous function) (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9267:18)
This error seems to happen only in Chrome. I tried to trigger it in IE and Firefox, and it did not happen. However, when I tried it in Chrome on a slow computer, it also did not happen.
The computer on which I triggered this issue has an i7 CPU and 8 GB of RAM. The Angular version I am using is 1.0.8.
I have attached code that isolates this issue. My project code is more complex than this, so it takes fewer attempts to trigger it there; the isolated code may require a few more presses of F5 to trigger the issue.
The isolated example code: Link
How to trigger the bug: video
Quickly press F5 until it happens.
I created an issue on the Angular project: Link

This question may be dead, but I've seen the same behavior. And it's definitely not an Angular issue. It's just a simple race condition, and it's more or less unavoidable. When you hit F5, the browser halts everything going on in the "sandbox" for security reasons: standard requests for things like IMG content, XHR requests, the main thread your Angular code runs in, etc. It's extremely fast but not instant, and if you work hard to disrupt it, you succeed.
IMO the root question here should be whether this is "bad", and to me, that means does it either a) create a security vulnerability, or b) do something that could corrupt important data.
You can rule out the first because only code already running in the browser at the moment F5 was pressed could POSSIBLY try to exploit whatever edge cases come out of the race condition here. And if it's already running in the browser, it can do anything it wants because it's already "within the walls" so to speak.
You can rule out the second because you aren't supposed to design Web apps that require 100% confidence that the next line of code will be executed... because you know that's never certain. The user could close the browser at any time, and once you assume that, you have to code in a way that tolerates these kinds of conditions anyway.
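For context, here is a minimal sketch (illustrative only, not the asker's code) of how "$digest already in progress" usually arises, along with the Angular 1.0.x-era guard people reach for. The guard helps with $apply calls in your own code; it cannot prevent the race described above, where the competing $apply comes from Angular's own $http handler while the page is being torn down.

```javascript
angular.module('app', []).controller('DemoCtrl', function ($scope) {
  // Calling $apply while a digest cycle is already running throws
  // "Error: $digest already in progress".
  $scope.update = function (value) {
    $scope.value = value;
    // Common Angular 1.0.x-era guard: only trigger a digest when no
    // $digest/$apply phase is currently active.
    if (!$scope.$$phase) {
      $scope.$apply();
    }
  };
});
```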

Related

OWASP ZAP Not receiving Alerts for subsequent active scan

I have been using ZAP to find any final kinks in a website I'm working on. Everything is working great, except that I've noticed there are no alerts being logged in the ZAP GUI when I run an active scan following a passive spider.
The initial passive scan for a new session logs alerts just fine, but I'd really like to see the alerts from the active scan. Am I missing something? I tried starting a new session and going straight to attacking, but it's still not logging anything. Does it maybe need to finish before it starts logging the alerts? I have checked the generated HTML report and it doesn't indicate whether an alert was flagged by a passive or active scan, so I really can't tell. I doubt there are so few vulnerabilities in my little web app.
If anyone has an idea as to what setting I'm missing, or if I'm doing something wrong, I'd appreciate the advice.
Ah, I think I may have found what was going on. I checked the real-time scan progress and ZAP skipped many attacks because of the low rule limits I had set to speed up the scan.
How are you exploring your app?
The number of requests is very low, which suggests to me that you are not exploring your app effectively.
You can either explore the app manually (by proxying your browser through ZAP) or use automation via:
The standard spider (fast, but doesn't handle JavaScript very well)
The AJAX spider (slower, but launches browsers, so it handles JS well)
Your own unit tests (good, but only if you have some)
Have a look at your app in the Sites tree - if it doesn't appear to be showing as many pages as you expect, then you need to focus on exploring your app more effectively.
The active scanner doesn't do any exploring; it only attacks the URLs you've found by other means.
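For what it's worth, here is a rough sketch of that explore-then-attack-then-read-alerts flow driven through ZAP's JSON REST API from Node (18+ for the built-in fetch). The endpoint paths are ZAP's; the API key, port, and target URL are placeholders you would need to adjust for your own setup.

```javascript
const ZAP = 'http://localhost:8080';      // ZAP's default API address (adjust as needed)
const APIKEY = 'changeme';                // your ZAP API key
const TARGET = 'http://localhost:3000';   // the app under test

// Small helper that calls a ZAP JSON API endpoint and parses the response.
async function zap(path, params) {
  const qs = new URLSearchParams({ apikey: APIKEY, ...params });
  const res = await fetch(`${ZAP}${path}?${qs}`);
  return res.json();
}

// Poll a status view until ZAP reports 100% for the given scan.
async function waitFor(statusPath, scanId) {
  for (;;) {
    const { status } = await zap(statusPath, { scanId });
    if (Number(status) >= 100) return;
    await new Promise(r => setTimeout(r, 2000));
  }
}

(async () => {
  // 1. Explore: the traditional spider (the AJAX spider has its own endpoints).
  const { scan: spiderId } = await zap('/JSON/spider/action/scan/', { url: TARGET });
  await waitFor('/JSON/spider/view/status/', spiderId);

  // 2. Attack only what the exploration found.
  const { scan: ascanId } = await zap('/JSON/ascan/action/scan/', { url: TARGET });
  await waitFor('/JSON/ascan/view/status/', ascanId);

  // 3. Read back the alerts recorded for the target.
  const { alerts } = await zap('/JSON/core/view/alerts/', { baseurl: TARGET });
  console.log(`${alerts.length} alerts:`, alerts.map(a => a.alert));
})();
```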

Perform action before Chrome closes itself

I'm working on a Chrome extension which needs to perform an action just before Chrome closes. Is there any method, like chrome.window.onClose.addListener(...) or chrome.runtime.onClose.addListener(...), to ensure that something will be done before Chrome closes itself?
I've been struggling with this problem for two weeks. Here are the options for potential solutions that I've found, but they didn't work.
My investigation results:
Using chrome.runtime.onSuspend.addListener(...) - I don't know why, but it doesn't work at all for me. For example, I've written a callback for this event which tries to add hardcoded data to IndexedDB, but nothing gets added. The description of this method even says that the callback is not guaranteed to complete. onSuspend documentation
Sent to the event page just before it is unloaded. This gives the extension opportunity to do some cleanup. Note that since the page is unloading, any asynchronous operations started while handling this event are not guaranteed to complete.
Chrome working in the background - with this option my extension seems to work, but... only on versions of Windows older than Windows 10. I've checked a few options, and on my other computer, which has Windows 7 installed, the processes connected to Chrome close more slowly, which gives my extension time to perform the necessary tasks. Unfortunately, Windows 10 kills all the processes much faster. I've checked the option "continue running background apps when google chrome is closed", but it doesn't change anything. I've also enabled the flag "#enable-push-api-background-mode", but it hasn't helped either.
Keep Chrome running in the background on Win10, Enable flag to keep Chrome processes running
chrome.app.window.current().onClosed - I've found a similar question on Stack Overflow, and one of the answers was the code mentioned above. The problem is that when I try to type chrome.app.win... in the console, it doesn't show any suggestions in either the background script or the content script. Google's documentation doesn't mention any permission that I have to add to my manifest.json to get access to this functionality. Stack Overflow similar question, Google's documentation about chrome.app
Methods built into the web browser - I thought the window.onclose method might be useful in my case. I performed the same test as for chrome.runtime.onSuspend, but the result was exactly the same. Documentation
I'm stuck and have no idea how to solve my problem. Maybe I missed something important? I hope you can help me.
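Not a guaranteed solution, but for completeness, here is a minimal sketch (assuming a Manifest V2 persistent background page) of the "last window closed" heuristic that is sometimes tried instead: react when the final browser window goes away and keep the work synchronous, since asynchronous work racing shutdown is exactly the problem described above.

```javascript
chrome.windows.onRemoved.addListener(function () {
  chrome.windows.getAll({}, function (windows) {
    if (windows.length === 0) {
      // The last browser window is gone; the background page may only
      // survive for a moment longer (or not at all on a fast shutdown).
      // Synchronous APIs such as localStorage on an MV2 background page
      // are more likely to finish than IndexedDB or chrome.storage callbacks.
      localStorage.setItem('lastClosedAt', String(Date.now()));
    }
  });
});
```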

My app often causes a Chrome tab crash - how can I fix this?

My app uses a lot of JavaScript with Backbone.js that manipulates the DOM in response to various events. It sometimes causes a tab crash in Google Chrome (just the tab crashes, not the whole browser). We have been investigating what is actually causing this issue, but there is no clue. We monitored per-tab memory from the Chrome task manager, but the crash happens even when memory use is small.
Is there any way to debug this kind of issue? We have no clue how to identify what the problem is.
UPDATE
The problem is that it is not easy to replicate the crash intentionally. It sometimes happens to some users, and those users normally experience it repeatedly (typically after clicking a submit button). On the other hand, for other users, Chrome still works fine even when the tab starts to use over 200 MB of memory after complicated DOM manipulation. Using the profiling tool in the developer tools might be one way, but it looks like a lot of work to identify the issue that way. It would be great if somebody knows an efficient way to identify what the problem is...
What we also know is that we have been suffering from a memory leak, so we started to unbind events once the DOM nodes they are bound to are deleted. That helped us avoid huge memory usage, as far as we can tell from the task manager. However, we do not know whether we have done this thoroughly enough, or whether it has anything to do with the tab crash...
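Not an answer to the crash itself, but as a point of comparison, here is the usual Backbone pattern for the event unbinding described above (the view and model here are made up for illustration): bind with listenTo so that remove(), which calls stopListening, detaches every handler along with the DOM node.

```javascript
var RowView = Backbone.View.extend({
  initialize: function () {
    // listenTo registers the binding on the view itself, so stopListening
    // can find and remove it later; a plain this.model.on() would not be tracked.
    this.listenTo(this.model, 'change', this.render);
  },
  render: function () {
    this.$el.text(this.model.get('name'));
    return this;
  }
});

// Tearing the row down: remove() detaches the element and stops listening,
// leaving nothing that still references the dead DOM node.
// rowView.remove();
```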
Open up the developer tools, click on the Console tab, and add some console.log(message); calls throughout your code to see how far it gets before it crashes. Without more information there isn't much else to do.

IE Freezing from onbeforeunload Request

I've run into an issue where Internet Explorer 7, 8 and 9 freeze because of a request that gets made in the onbeforeunload JavaScript event. This is a complex site with a lot going on, but I've managed to isolate the issue into a small test. Exact steps to reproduce in IE7:
Open Internet Explorer 7
Keep one tab open the whole time you're testing (to keep the same session)
Open a new tab and go to http://www.brillbrothers.com/LockIE/test2/
Close the tab
Repeat steps 3 & 4 until requests start to hang (it normally takes two times for me). Once they are hanging, the requests don't even get made by IE any longer, according to the web proxies/network monitors I'm running.
Steps to reproduce in IE8 are about the same, although the results are a bit different. It normally works the first two times, then intermittently works for a while, then the requests start hanging every time.
In IE9, the steps are still the same, but it takes even longer to reproduce the issue.
I can't reproduce this in IE10.
I've also found that adjusting some registry values can make the problem better. If I raise the max connections per domain (KB 282402), the problem takes longer to reproduce. I do have complete control over the browser in this case, since this is used for an internal tool, so any sort of registry change, etc., is fine as long as it won't cause different problems.
I've also setup a script to run the steps above automatically. It basically opens a new window to the test URL, waits a few seconds, then closes the window. You can run that here to see the problem quickly (you'll have to allow the page to open new windows): http://brillbrothers.com/LockIE/test2/runTest.html
The crazy thing is almost everything seems to affect the outcome. If I set different cookies in the AJAX call, change the content of the AJAX call, etc, it can make it work differently. Completely crazy. I'll be seriously impressed if someone has a solution to this. :)
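For illustration only (the URL is made up, and this is not the original test page), the sketch below contrasts the kind of unload-time XHR that tends to leave old IE with a half-open connection against the image-beacon workaround commonly suggested for logging during onbeforeunload:

```javascript
window.onbeforeunload = function () {
  // Problematic pattern: an asynchronous XHR started while the page unloads
  // can be abandoned mid-flight and tie up one of IE's per-host connections.
  // var xhr = new XMLHttpRequest();
  // xhr.open('GET', '/LockIE/ping', true);
  // xhr.send();

  // Beacon-style alternative: the browser issues a plain GET for the image,
  // and no XMLHttpRequest connection is left dangling for the dying page.
  var img = new Image();
  img.src = '/LockIE/ping?t=' + new Date().getTime();
};
```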

How do I sustain JavaScript execution with custom error handling in IE6?

I am writing an application that will have some users on Internet Explorer 6. These users on IE6 have their browsers set to debug JavaScript errors. If a JS error occurs on the page, a window pops up and the user is asked if they would like to 'continue running scripts on this page'.
However, my application has its own internal try/catch system for handling/logging errors and prompting the user what to do. In my error handling system, I distinguish between fatal errors and non-fatal errors. With non-fatal errors, the user is prompted to re-try after a moment and we allow JavaScript to continue executing. With fatal errors (e.g., data needed for rendering the initial page load is missing), JavaScript execution is halted to prevent further error and the user is prompted to reload the page.
The problem with IE6 is that when script debugging is turned on, JavaScript execution is halted no matter what: there's no opportunity for the user to press 'Yes, continue running scripts on this page'. So even if the error was non-fatal, it's game over and the user has to reload the page.
Does anyone know a way to get around this and prevent IE6 from halting JavaScript execution? Unfortunately, asking my users to change their IE settings is not an option.
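To make the setup concrete, here is a minimal sketch of the fatal vs. non-fatal split described above (the structure and the logError/isFatal helpers are assumptions, not the asker's actual code): everything runs through one guarded entry point, errors are logged, and only fatal errors stop further work.

```javascript
var appHalted = false;

function runGuarded(task) {
  if (appHalted) return; // a fatal error already occurred; do nothing more
  try {
    task();
  } catch (e) {
    logError(e);          // assumed logging hook
    if (isFatal(e)) {     // assumed classification helper
      appHalted = true;
      alert('Something went wrong. Please reload the page.');
    } else {
      alert('A temporary problem occurred. Please try again in a moment.');
      // the error is swallowed here, so the rest of the page keeps working
    }
  }
}
```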
I felt we got off on the wrong foot, and I think this is a valid question that deserves an answer. Digging into this issue for you: what you are trying to accomplish is essentially to give the client an IE6-specific directive to cease exception bubbling.
There is a (perhaps) useful article about the IE6 debugger and its related scriptable bits:
overview : http://blogs.msdn.com/b/ie/archive/2004/10/26/247912.aspx
debugger directive MSDN : http://msdn.microsoft.com/en-us/library/0bwt76sk(vs.71).aspx
Therein you can find references to both the Debugger and Stop directives. This is an old article (2004), but it was the most relevant to your issue that I could find. Good luck.
It's not possible, certain errors will always halt JavaScript execution in IE6.
Perform more thorough checking of any input data. Errors just should not appear at all.
BTW, consider using JSONP instead of XHR: failing to load a file via JSONP should not trigger a JS exception.
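A minimal sketch of that JSONP suggestion, with a made-up URL and callback name: the data is loaded via a script tag, so a failed load simply never invokes the callback instead of raising an exception that has to be handled.

```javascript
function loadViaJsonp(url, callbackName, onData) {
  window[callbackName] = function (data) {
    window[callbackName] = undefined; // avoid delete on window in old IE
    onData(data);
  };
  var script = document.createElement('script');
  script.src = url + (url.indexOf('?') === -1 ? '?' : '&') +
               'callback=' + callbackName;
  document.getElementsByTagName('head')[0].appendChild(script);
}

// Usage (the server is expected to respond with handleData({...});):
// loadViaJsonp('/api/page-data', 'handleData', function (data) { render(data); });
```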
