I have a bunch of functions that need to be called on $(window).on('load' ...). Occasionally, the site hangs indefinitely while loading. There are a bunch of embeds and other pieces of media being pulled in from various APIs.
Is it possible to detect what is still pending without attaching an event listener to every resource?
Edit for clarification:
@PamBlam's comment below was more tuned in to the problem -- I want to be able to do this with JavaScript, so it could happen client-side while my users are browsing.
Specifically, I'd like to identify pending requests, gather any relevant details, and send a note to an error logger (such as Sentry) so I can see which specific resources are causing problems for users on the live site. Perhaps the only solution is to create a new loadResource function (as suggested in some answers) that compiles these details and, after a long timeout, notifies the logger if the resource still hasn't finished. But this seems like overkill, and some of these resources are <iframe>s included directly in the HTML, so hooking those in would be even more work.
What I was hoping for - and I'm guessing that this doesn't exist, as I assume JavaScript doesn't have permission to see what's happening at the browser level - was something that could, after a long timeout, essentially look at the Network tab of dev tools and send a report of what is still pending.
One of the best ways to debug JavaScript is Chrome DevTools (while I am a big advocate of Firefox, in this case Chrome is just mind-blowing). Use debug breakpoints and the Network panel to the best of your ability.
For reference:
https://developers.google.com/web/tools/chrome-devtools/
Count how many resources are loading, and decrement the count when each is finished. When the count is zero all resources are done.
var resourcesPending = 0;

// Load some resources: increment the counter when each load starts,
// and decrement it in the resource's completion callback.
resourcesPending++;
loadAResource(function() {
    resourcesPending--;
    if (!resourcesPending) allResourcesLoaded();
});

resourcesPending++;
loadAResource(function() {
    resourcesPending--;
    if (!resourcesPending) allResourcesLoaded();
});

// etc..
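To tie this to the logging requirement in the question, the same counter idea can be extended with a watchdog timer that labels each pending resource and, after a long timeout, reports anything still outstanding. A minimal sketch, assuming loadAResource takes a completion callback as above, and with reportToLogger standing in for a real Sentry call (both names, and the 30-second limit, are illustrative):

var pending = {};  // key -> true while the resource is still loading
var nextId = 0;

function trackResource(label, load) {
    var key = (nextId++) + ':' + label;
    pending[key] = true;
    load(function() {
        delete pending[key];  // finished; stop tracking it
    });
}

// After a long timeout, report whatever is still outstanding.
// reportToLogger is a placeholder for a real logger call (e.g. Sentry).
setTimeout(function() {
    var stillPending = Object.keys(pending);
    if (stillPending.length) {
        reportToLogger('Still pending after 30s: ' + stillPending.join(', '));
    }
}, 30000);

// Usage, with the loadAResource function from above:
trackResource('twitter-embed', loadAResource);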
Related
I'd like to run some external JavaScript with a time restriction, so that if it takes more than N seconds it will be stopped.
Some browsers, e.g. Firefox, already do this with a dialog that asks if you want to allow a script to keep running. However, I'm looking for a bit more:
I want to set my own time limit rather than use the browser's default (e.g., I believe Chrome's is much longer than Firefox's).
I want to be able to do this on a per-script basis, not per-page. One page may contain multiple scripts that I want to restrict in this way (hence my idea to use <iframe> elements).
I was thinking it would be very convenient if there were simply an attribute I could attach to an <iframe>—e.g., something like js-time-limit="5000" (I just made that up)—but I haven't been able to find anything like that.
Is this possible? To put a configurable time limit on JavaScript execution in a browser?
If the iframe is doing computation work and doesn't need to access the DOM, then use web workers: https://developer.mozilla.org/en-US/docs/Web/Guide/Performance/Using_web_workers
Here is also a library that can abstract away the hard parts for you! http://adambom.github.io/parallel.js
Important parts:
Dedicated Web Workers provide a simple means for web content to run scripts in background threads.
If you need to immediately terminate a running worker, you can do so by calling the worker's terminate() method: myWorker.terminate();
Browser compatibility
Chrome 3, Firefox 3.5 (Gecko 1.9.1), Internet Explorer 10, Opera 10.60, Safari (WebKit) 4
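Putting those two pieces together, you can enforce a time limit by starting the worker and terminating it if it hasn't produced a result within the allotted time. A minimal sketch (the worker.js file name and the 5-second limit are assumptions):

var TIME_LIMIT_MS = 5000;
var worker = new Worker('worker.js');  // runs the untrusted code off the main thread

// Watchdog: hard-stop the worker once the limit is reached.
var timer = setTimeout(function() {
    worker.terminate();
    console.log('Script timed out');
}, TIME_LIMIT_MS);

worker.onmessage = function(e) {
    clearTimeout(timer);  // finished in time; cancel the watchdog
    console.log('Result: ' + e.data);
};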
For posterity: my original goal was to allow users of a website to submit JS code and run it in the background with a time limit so that, e.g., infinite loops don't wreak havoc on the CPU.
I created a library called Lemming.js using the approach Joe suggested. You use it like this:
var lemming = new Lemming('code to eval');

lemming.onTimeout(function() {
    alert('Timed out!');
});

lemming.onResult(function(result) {
    alert('Result: ' + result);
});

lemming.run({ timeout: 5000 });
You can check out the GitHub repo for more details.
I'm doing a couple of things with jQuery in an MTurk HIT, and I'm guessing one of these is the culprit. I have no need to access the surrounding document from the iframe, so if I am, I'd like to know where that's happening and how to stop it!
Otherwise, MTurk may be doing something incorrect (they use the 5-character token &amp; to separate URL arguments in the iframe URL, for example, so they DEFINITELY do incorrect things).
Here are the snippets that might be causing the problem. All of this is from within an iframe that's embedded in the MTurk HIT** (and related) page(s):
I'm embedding my JS in a $(window).load(). As I understand it, I need to use this instead of $(document).ready() because the latter won't wait for my iframe to load. Please correct me if I'm wrong.
I'm also running a RegExp.exec on window.location.href to extract the workerId (along the lines of the sketch below).
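For illustration, that extraction looks something like the following (the exact pattern is an assumption; workerId is the query parameter MTurk puts in the iframe URL):

// Pull the workerId query parameter out of the current URL
var match = /[?&]workerId=([^&#]*)/.exec(window.location.href);
var workerId = match ? match[1] : null;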
I apologize in advance if this is a duplicate. Indeed - after writing this, SO seems to have made a good guess at this: Debugging "unsafe javascript attempt to access frame with URL ... ". I'll answer this question if I figure it out before you do.
It'd be great to get a good high-level reference on where to learn about this kind of thing. It doesn't fit naturally into any topic that I know - maybe learn about cross-site scripting so I can avoid it?
** If you don't know, an MTurk HIT is the unit of work for folks doing tasks on MTurk. You can see what they look like pretty quickly if you navigate to http://mturk.com and view a HIT.
I've traced the code to the following chunk run within jquery from the inject.js file:
try {
    isHiddenIFrame = !isTopWindow && window.frameElement && window.frameElement.style.display === "none";
} catch(e) {}
I had a similar issue running jQuery in Mechanical Turk through Chrome.
The solution for me was to download the jQuery JS files I wanted, then upload them to Amazon's secure S3 service.
Then, in my HIT, I referenced the .js files at their new home on https://s3.amazonaws.com.
Tips on how to make code 'secure' by Chrome's standards are here:
http://developer.chrome.com/extensions/contentSecurityPolicy.html
This isn't a direct answer to your question, but our lab has had success circumventing (read: hacking around) this problem by asking workers to click a button inside the iframe that opens a separate pop-up window. Within the pop-up window, you're free to use jQuery and any other standard JS resources you want without triggering any of AMT's security alarms. This method has the added benefit of letting workers view your task in a full-sized browser window instead of AMT's tiny embedded iframes.
Update
Ok - I now know where the multiple page loads are coming from! (However, the mystery is not yet solved).
It seems that immediately after a request is made to a page containing AdSense ads, Google makes a request for exactly the same URL (one or more times).
For example, this is what the logs look like (note the requests from Mediapartners-Google):
2011-07-20 09:50:20 xxx.xxx.xxx.xxx GET /requestedURL/ 80 - xxx.xxx.xxx.xxx Mozilla/5.0+(Browserstring removed) 200 0 0 1140
2011-07-20 09:50:20 xxx.xxx.xxx.xxx GET /requestedURL/ 80 - 66.249.72.52 Mediapartners-Google 200 0 64 218
2011-07-20 09:50:22 xxx.xxx.xxx.xxx GET /requestedURL/ 80 - 66.249.72.52 Mediapartners-Google 200 0 0 171
(I should have paid more attention to the IIS logs, rather than my own application logs - it just didn't occur to me that these multiple, identical, simultaneous requests could have been coming from different sources). This also explains why I couldn't find anything strange when analysing the requests with Wireshark, and why Fiddler didn't show anything strange.
So the question for the bounty now becomes:
Why is Google making these requests so quickly after the page is requested? (I know they need to assess the page for content, but doing it immediately afterwards, and multiple times, seems like abuse to me.)
What can I do to stop this?
And out of interest:
Has anyone else seen something similar in their logs? (Or is this something weird with my AdSense account?)
Ok, I'll apologise in advance for the length!...
This question is related to this one, regarding the Google AdSense JavaScript code causing errors (of the form Unable to post message to googleads.g.doubleclick.net. Recipient has origin something.com).
I won't duplicate all of the information there, but the conclusion seems to be that the AdSense JS is buggy. (please read the question for background if you have time).
I knew about this problem for some time, but decided to live with the JS errors rather than pulling AdSense from the site.
However, recently I noticed that in my ASP.NET MVC2 application, Controller Actions seemed to be called twice per page request (sometimes even 3 times). Oddly, it was only happening on the production server. After some thought I realised that one difference between the Dev and Production environments was that the AdSense JavaScript was only active in production.
To test this I removed all AdSense code from one of the production pages, and lo and behold, the multiple-page-load problem went away!
I thought that perhaps it was the fact that there were general JS errors on the page that was causing the problem, so to test this I introduced some simple errors into my own JS code, however this did not cause the multiple-page-load problem to reappear.
One known situation where pages can be called multiple times per request is when there are image tags or external resource references with empty src attributes. Crucially, the most upvoted answer to the AdSense JS bug question notes that:
"The targetOrigin argument in this call, this.la is set to
http://googleads.g.doubleclick.net. However, the new iframe was
written with its src set to about:blank."
This seems eerily similar to the empty src issue... It seems like too much of a coincidence, and currently I'm of the opinion that this is the problem.
[EDIT: This was a red herring]
However, I've no idea where to go from here. These multiple action calls are causing real problems (I'm having to use code blocking, serialised transactions, and all sorts of nasty hacks to limit the adverse effects). Of course, I could be barking up the wrong tree entirely - I'm puzzled that I can't find any other references to this, given the ubiquity of AdSense and the nature of the problem (but then again, the conclusions of the AdSense JS bug question are also surprising). I would love for this to turn out to be a stupid mistake on my part, so I need a sanity check.
I'd like to ask the community:
Has anyone else experienced this problem?, or can anyone who is using AdSense replicate and confirm it? [See note below]
Assuming the problem is what it seems, what can I do? (other than pulling AdSense of course)
If not, then what might be causing this?
To summarise:
- My actions are being executed 2 (sometimes 3) times per page request. THIS ONLY HAPPENS WHEN GOOGLE ADSENSE ADS ARE PRESENT.
- I removed all AdSense JS and introduced an error into my own JS: actions are called only once.
- A similar problem can happen when empty src properties are present on the page.
- An answer to a previous question summarises that the AdSense JS sets src="about:blank" on an iframe.
- I have come to the conclusion that the src="about:blank" from the AdSense code is the most likely source of the problem.
- If I disable JavaScript in the browser, the problem goes away.
Just to document the things I have ruled out:
- This is happening across browsers: Chrome (12), Firefox (5) and IE (8).
- I have disabled all plugins on the browsers (YSlow, Firebug, etc.).
- There are no empty src attributes (src=""/src="#") for images or other external resources in my HTML.
- There are no empty URL references in the CSS (url('')).
- It's unlikely to be a server-side code/config problem, as it doesn't happen in Dev (and one of the few differences between Dev and Production is the absence of the AdSense JS in Dev).
Note: For anyone looking to replicate this, it should be noted that, strangely, when the multiple action calls happen Fiddler shows only one request being sent to the server. I have no idea why this should be the case, but the server logging doesn't lie :) Perhaps someone who has prior experience with this problem when caused by empty src attributes in img tags can say whether they have seen the same behaviour with Fiddler.
Requested extra information
HTML (@Ivan)
Here's how I'm implementing the AdSense (ids removed):
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<div class="ad">
    <% if (!HttpContext.Current.IsDebuggingEnabled) { %>
        <script type="text/javascript"><!--
            google_ad_client = "ca-pub-xxxxxxxxxxxxxxx";
            /* xxxxxxxxxxxxxxx */
            google_ad_slot = "xxxxxxxxx";
            google_ad_width = 728;
            google_ad_height = 15;
        //-->
        </script>
        <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>
    <% } else { %>
        <img src="/Content/images/googleAdMock728x15_4_e.gif" width="728" height="15" />
    <% } %>
</div>
This is being inserted by a RenderPartial in the View:
<% Html.RenderPartial("AdSense_XXXXXX"); %>
TCP Logging (@Tomas)
So far I have done a Wireshark capture:
- on the client when requesting the page on production with the problem
- on the client when requesting the page on production without the problem (i.e. AdSense removed)
I can't really see a significant difference between the two (although my network skills are not great). One thing to note is that they both seem to have a TCP retransmission of the HTTP request immediately after the initial request - I don't know the significance of that. I can confirm, though, that in case 1 the server logs reported 2 executions, and in case 2 only one execution.
Next I will try TCP logging on the server side in both cases, and post results here.
Mediabot is the name given to the web crawler that Google uses to crawl webpages for purposes of analysing the content so Google AdSense can serve contextually relevant advertising to the page.
In my experience, it is unpredictable and, yes, it can be pretty heavy and annoying.
If you don't want Mediapartner bot to access a specific page, you can disallow it in your robots.txt with:
#
# disallow adsense bot
#
User-agent: Mediapartners-Google
Disallow: path to your specific page
This will have the drawback of serving untargeted ads from that specific page.
If you are always seeing this pattern on the same page with different query strings, adding a rel="canonical" link could ease the pain (example below).
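For reference, the canonical link goes in the page's <head>, pointing at the preferred URL for the page (the URL here is a placeholder):

<link rel="canonical" href="https://www.example.com/your-specific-page/" />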
If you can't resolve this issue, and you see it as abuse, don't hesitate to ask for help in Google's Crawling, Indexing and Ranking support forum.
Given that the behaviour you are observing appears to be hard to avoid, can we focus on workarounds instead?
Can you differentiate requests based on the User-Agent, and thus filter out the bot's requests?
Could that be a viable approach for you?
If so, then you could probably build on this approach: http://blog.flipbit.co.uk/2009/07/writing-iphone-sites-with-aspnet-mvc.html
There they detect iPhones, but the concept is the same for the Mediapartners-Google bot.
Aside from the embedding of the AdSense code itself, there are two things related to AdSense that differ in your two test cases:
What else happens when !HttpContext.Current.IsDebuggingEnabled? This appears to be the de-facto production flag; maybe there is some other nuance somewhere that is happening that depends on this same flag.
Is it possible that Html.RenderPartial("AdSense_XXXXXX") is somehow causing your Controller to jump back to the beginning of its execution?
From your description, it seems like the execution is happening twice on the server but only one request is being sent from the client. This implies a server error, and these two lines are the crux of your AdSense triggering. To further narrow it down, try embedding the AdSense partial directly instead of calling Html.RenderPartial(). If that doesn't change the result, it might be worth a sanity check on what else switches on HttpContext.Current.IsDebuggingEnabled.
Failing that, it might be helpful to know whether your server-side logging takes place as the request is received, before the response is sent, or after the response is sent.
Yes, I just detected this during a TeamViewer session with my partner. On my box, the main page of my site loads only once per request.
Then, by coincidence, while using Fiddler my partner got 4 requests to the same page. It is a 1.5 MB page with big scripts and lots of other dependencies, so this was truly a WTF moment; I have never seen anything like this in 15 years of web development.
If Google is doing this, I must say they should realise today's sites might have very big pages and very big audiences. That could mean they are inflating bandwidth usage by a factor of 4 per request. Like I said, WTF?
I wish this Q&A had a more definitive resolution.
I do use the Google Translate widget, but this is only occurring on his box and for the main page. The other pages also use the Translate widget, and I do request jQuery via the Google CDN. Could anything from Google be doing this?
As part of a loading screen for an offline-enabled web application I'm building (using a cache manifest), I'd like to display an accurate progress bar that lets users know which files have been downloaded so far and which are still pending. Something like the following:
Loading...
/assets/images/logo.png: loaded
/assets/images/splashImage.png: pending
I know that I can use the cache "pending" event, but I don't see that the event arguments have any data associated with them.
Is there any way to do this?
There is a progress event that gets triggered when each file downloads, however its payload does not include the file name in any browser that I've tested with (Chrome, Safari, FF beta). Chrome displays the file name in the Console (though as far as I know it's inaccessible to JS), but neither Safari nor FF even go that far. And from what I've seen, the files do not download in the same order that they're listed in the manifest, so there's not even a way to generate an ordered list then knock them off one at a time.
So in answer to your question, no, there isn't any way to do this right now. It's possible that in the future the progress event will include the filename - at least in some browsers - but right now this isn't possible.
I should add that in Chrome (not in Safari or FF) you can at least get a count of files to be downloaded, allowing you to at least calculate an accurate progress bar. To get this in Chrome you'd use the following:
var totalfiles = 0;

function downloadProgress(e) {
    totalfiles = Number(e.total);
}

window.applicationCache.addEventListener("progress", downloadProgress, false);
However, this will error out in other browsers, so you need to wrap it in a try/catch or use some other guard (e.g. checking typeof e.total) to avoid the error, as in the sketch below.
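A guarded version might look like this (a minimal sketch using the typeof check mentioned above):

var totalfiles = 0;

function downloadProgress(e) {
    // Only Chrome exposes e.total on appcache progress events
    if (e && typeof e.total !== "undefined") {
        totalfiles = Number(e.total);
    }
}

window.applicationCache.addEventListener("progress", downloadProgress, false);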
This is a few years late, but maybe it'll help someone else who's researching this.
It doesn't list the files or anything, but it shows an accurate(ish) progress bar based on the total number of files loaded. It may still need a little work...
https://github.com/joelabeyta/app-cache-percent-bar
We recently started using SVN Keywords to automatically append the current revision number to all our <script src="..."> includes (so it looks like this: <script language="javascript" src="some/javascript.js?v=$Revision: 1234 $"> </script>). This way each time we push a new copy of the code to production, user caches won't cause users to still be using old script revisions.
It works great, except for IE6. For some reason, IE6 sporadically acts as though some of those files didn't exist. We may get weird error statements like "Unterminated String Literal on line 1234," but if you try to attach a debugger process to it, it won't halt on this line (if you say "Yes" to the debugger prompt, nothing happens, and page execution continues). A log entry for it shows up in IIS logs, indicating the user is definitely receiving the file (status code 200, with the appropriate amount of bytes transferred).
It also only seems to happen when the pages are served over https, not over standard http. To further compound things, it doesn't necessarily happen all the time; you might refresh a page 5 times and everything works, then you might refresh it 20 more times and it fails every time. For most users it seems to always work or else to always fail. It is even unpredictable when you have multiple users in a corporate environment whose security and cache settings are forcibly identical.
Any thoughts or suggestions would be greatly appreciated, this has been driving me crazy for weeks.
Check your traffic with Fiddler2 to make sure the browser actually requests the page and doesn't just use its cache. Also check the URL of the JS script and the headers returned.
Are you using GZip? There have been issues reported with it.
I would suggest testing using the Internet Explorer Application Compatibility VPC Image. That way, you can do your tests with a 100% real IE6, and not one of those plugins that claim to simulate IE6 inside another browser.
I think this is a very clever idea. However, I think the issue could be related to the spaces in the URL. Technically, the URL should have the spaces encoded.
See if you can customize the keywords in SVN to generate a revision number without spaces or special characters (see the sketch below).
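If customizing the keyword on the SVN side isn't feasible, the expanded keyword can also be reduced to a bare revision number before it's written into the page. A minimal sketch of the idea in JavaScript (the keyword string is just an example):

// Reduce "$Revision: 1234 $" to the bare number "1234"
var keyword = "$Revision: 1234 $";
var rev = keyword.replace(/\D+/g, "");

// The resulting query string contains no spaces or special characters
var src = "some/javascript.js?v=" + rev;  // "some/javascript.js?v=1234"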