Log JavaScript errors from another subdomain

window.onerror in Firefox and Chrome seems to discard the real error message/location and always pass "Script error.", "", 0 when the offending script is on a different domain than the page itself. I have a site with separate www and static subdomains for pages and css/js, which renders error logging rather useless. Is there any way to enable the proper logging of such errors?
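For reference, a minimal sketch of the kind of handler involved (the console.log is a stand-in for whatever reporting call you actually use):

window.onerror = function (message, url, line) {
    // What actually arrives for a cross-origin script in Firefox/Chrome:
    // message === "Script error.", url === "", line === 0
    console.log(message, url, line); // stand-in for a real reporting call
};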

It doesn't sound like there's a workaround for this at the moment (I asked in the Mozilla IRC channel).
I've filed these bug reports to track this issue. Please chime in if you have opinions on what the best solution to this would be:
https://bugzilla.mozilla.org/show_bug.cgi?id=696301
https://bugs.webkit.org/show_bug.cgi?id=70574

The real solution is to use proper try { ... } catch(e) { ... } blocks in all code. However, I understand that may not always be an option.
If you don't have control over these other scripts, your next best option would be to load them as strings via JSONP, then use eval() (yes, I know eval is evil) to "inject" them into the current page. This way you'll still get the benefits of using a static domain (no cookies, CDN option, additional concurrent requests, etc.), but the JS will end up being on the page's request domain.
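As a rough sketch of that approach (the endpoint shape and the receiveScript callback name are hypothetical; adapt them to however your static server can wrap files):

// Fetch the script body as a string via JSONP, then eval it so the code
// is attributed to the page's own origin and window.onerror gets full details.
function loadScriptAsString(name) {
    var tag = document.createElement('script');
    // the static server must respond with: receiveScript("<js source as a string>")
    tag.src = 'http://static.example.com/jsonp/' + name + '?callback=receiveScript';
    document.getElementsByTagName('head')[0].appendChild(tag);
}
function receiveScript(source) {
    (0, eval)(source); // indirect eval runs the code in global scope
}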

Within your JS file, at the top, try doing:
document.domain = "yourdomain.com"

Related

JavaScript completely "tamper safe" variables

So, here is the issue.
I have something like:
// Dangerous __hostObject that makes requests bypassing
// the same-origin policy, exposed from other code.
(function () {
    var danger = __hostObject;
})();
delete __hostObject;
Am I perfectly safe knowing no script can tamper with or access __hostObject?
(If they can, I have a CSRF vulnerability or worse.)
Note 1: This is for a browser extension. I have better hooks than other scripts running on the page. I execute before them and I'm done before they've even loaded.
Note 2: I know this has been asked multiple times for scripts in general. I'm wondering if it's possible if I know I load before any other scripts.
Provided that the __hostObject is deletable, the code in your question is safe.
However, I assume that your real code is slightly more complicated. In that case, very careful coding is required, because the page can change built-in methods (e.g. Function.prototype.call) to get into your closure and do whatever evil things it wants. I have successfully abused extension frameworks such as Kango and Crossrider via this method when performing such tests.
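As a hedged illustration of that attack surface (the names here are made up): if the privileged closure ever calls a built-in the page can reach, the page can have replaced that built-in first.

// Page script poisons a built-in before (or after) the extension runs:
var stolen = [];
var realSlice = Array.prototype.slice;
Array.prototype.slice = function () {
    stolen.push(this); // whatever the "private" code sliced is now page-visible
    return realSlice.apply(this, arguments);
};
// If the extension's closure later does something like
//     var args = Array.prototype.slice.call(arguments);
// the page-controlled replacement above sees those arguments.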
Won't simply adding a breakpoint and reloading the script expose your __hostObject?

jQuery on MTurk, why does Chrome report "Unsafe JavaScript attempt to access frame with URL"?

I'm doing a couple of things with jQuery in an MTurk HIT, and I'm guessing one of these is the culprit. I have no need to access the surrounding document from the iframe, so if I am, I'd like to know where that's happening and how to stop it!
Otherwise, MTurk may be doing something incorrect (they use the 5-character token &amp; to separate URL arguments in the iframe URL, for example, so they DEFINITELY do incorrect things).
Here are the snippets that might be causing the problem. All of this is from within an iframe that's embedded in the MTurk HIT** (and related) page(s):
I'm embedding my JS in a $(window).load(). As I understand it, I need to use this instead of $(document).ready() because the latter won't wait for my iframe to load. Please correct me if I'm wrong.
I'm also running a RegExp.exec on window.location.href to extract the workerId.
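Something along these lines (a simplified sketch, assuming the parameter is literally named workerId):

var match = /[?&]workerId=([^&#]*)/.exec(window.location.href);
var workerId = match ? decodeURIComponent(match[1]) : null;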
I apologize in advance if this is a duplicate. Indeed, after writing this, SO seems to have made a good guess at it: Debugging "unsafe javascript attempt to access frame with URL ...". I'll answer this question if I figure it out before you do.
It'd be great to get a good high-level reference on where to learn about this kind of thing. It doesn't fit naturally into any topic that I know; maybe I should learn about cross-site scripting so I can avoid it?
** If you don't know: an MTurk HIT is the unit of work for folks doing tasks on MTurk. You can see what they look like pretty quickly if you navigate to http://mturk.com and view a HIT.
I've traced the problem to the following chunk, run within jQuery, from the inject.js file:
try {
    isHiddenIFrame = !isTopWindow && window.frameElement && window.frameElement.style.display === "none";
} catch (e) {}
I had a similar issue running jQuery in Mechanical Turk through Chrome.
The solution for me was to download the jQuery JS files I wanted, then upload them to the secure amazon S3 service.
Then, in my HIT, I called the .js files at their new home at https://s3.amazonaws.com.
Tips on how to make code 'secure' by Chrome's standards are here:
http://developer.chrome.com/extensions/contentSecurityPolicy.html
This isn't a direct answer to your question, but our lab has successfully circumvented (read: hacked around) this problem by asking workers to click a button inside the iframe that opens a separate pop-up window. Within the pop-up window, you're free to use jQuery and any other standard JS resources you want without triggering any of AMT's security alarms. This method has the added benefit of allowing workers to view your task in a full-sized browser window instead of AMT's tiny embedded iframes.
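A minimal sketch of that approach (the button id and task URL are hypothetical):

document.getElementById('open-task').onclick = function () {
    // The pop-up is a top-level window on your own domain, so jQuery and
    // friends run without tripping any frame-access restrictions.
    window.open('https://your-domain.example/task.html', 'task',
        'width=1024,height=768,scrollbars=yes');
};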

Prevent JavaScript from Stopping when Error is Encountered

Our product inserts a script into clients' websites, kind of like a live chat box.
Often, clients' websites have buggy JavaScript that also stops our code (the browser stops execution when errors are encountered). Is there any way to make our code still execute even though there are errors in the console about things like undefined methods or variables?
Thanks for your help.
The short answer is that you really can't.
"Solution" #1: You could insist that YOUR 3rd party code run before anyone else's. In most cases, this isn't possible or even desirable.
"Solution" #2: You could insist that the 1st party engineers wrap all 3rd party code in try/catch blocks. But, this solution really doesn't buy you any guarantee, because very frequently 3rd party libraries attach additional <script> tags to the page - these would not fall under the "jurisdiction" of the try/catch scope enclosing the code which created this/these tag(s).
"Solution" #3: You could build YOUR app entirely within the scope of an <iframe>, thereby avoiding the issue entirely. Unfortunately, even if you're very smart, you'll quickly run into cross domain violations, 3rd party cookie restrictions, and the like. It's very probable that this will not work for you.
"Solution" #4: You could explain the issue to your client, and have them demand that the other 3rd party code run cleanly. I say this is a "solution" because, frankly, it's not a "solution" to your question if your question is how to avoid doing exactly this.
Unfortunately, option #4 is your best bet. It may help if you observe other 3rd party libraries "breaking" in the same fashion: you can tell your client "hey, it's not just me - X, Y, and Z are all also 'broken' because of <name of other 3rd party library>." It may cause them to put the heat on the offending code, which makes the web a happier place for all involved.
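To illustrate the limitation in "Solution" #2 (thirdPartyInit and the script URL below are hypothetical):

try {
    thirdPartyInit(); // synchronous errors in here are contained
} catch (e) {
    // the widget failed, but the rest of the page keeps running
}
// ...but if the wrapped code appends its own script tag, that script runs
// later, outside any try/catch, and its errors are not contained:
var s = document.createElement('script');
s.src = 'https://third-party.example/more.js';
document.body.appendChild(s);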
As others have said, continuing after an error might not be the best thing to do, but you can try this:
function ignoreerror() {
    // returning true suppresses the browser's default error handling
    return true;
}
// assign the function itself; calling it here (ignoreerror()) would assign
// its return value instead of installing the handler
window.onerror = ignoreerror;
More details here
The onerror event fires whenever a JavaScript error occurs (depending on your browser configuration, you may see an error dialog pop up). The onerror event is attached to the window object, a rather unusual place to take refuge in, but for good reason. It is attached this way so it can monitor all JavaScript errors on a page, even those in the <head> section of the page.
Opera has a page with more details
Browsers supporting window.onerror
Chrome 13+
Firefox 6.0+
Internet Explorer 5.5+
Opera 11.60+
Safari 5.1+
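If you'd rather record the error before suppressing it, a small variant of the handler above (the /jserror endpoint is hypothetical):

window.onerror = function (message, url, line) {
    // report the details via an image beacon to your own server
    new Image().src = '/jserror?m=' + encodeURIComponent(message) +
        '&u=' + encodeURIComponent(url) + '&l=' + line;
    return true; // then suppress the default error handling
};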
You can't from your code - they need to use try/catch for questionable pieces of script.
You could have them insert an iframe into their page instead of injecting your code with a script tag, like so: http://jsfiddle.net/EzMGD/ Notice how the script throws an error, yet we can still see the content in the iframe. The iframe also keeps your variables isolated from theirs, if applicable.
<script>
    MeaningOfLife();
</script>
<iframe src="http://bing.com"></iframe>
Or inject the code so it's the very first or very last script.
Well, for me this works fine:
element = document.querySelector('.that-pretty-element');
if (element != null) {
    element.onclick = function () {
        alert(" I'm working beibi ;) ");
    };
}
querySelector() returns null when no element matches, so we can guard against that with an if.

Google AdSense JavaScript causing multiple page-loads?

Update
Ok - I now know where the multiple page loads are coming from! (However, the mystery is not yet solved.)
It seems that immediately after a request is made to a page containing AdSense ads, Google makes a request for exactly the same URL (one or more times).
e.g. this is what the logs look like (note requests from Mediapartners-Google):
2011-07-20 09:50:20 xxx.xxx.xxx.xxx GET /requestedURL/ 80 - xxx.xxx.xxx.xxx Mozilla/5.0+(Browserstring removed) 200 0 0 1140
2011-07-20 09:50:20 xxx.xxx.xxx.xxx GET /requestedURL/ 80 - 66.249.72.52 Mediapartners-Google 200 0 64 218
2011-07-20 09:50:22 xxx.xxx.xxx.xxx GET /requestedURL/ 80 - 66.249.72.52 Mediapartners-Google 200 0 0 171
(I should have paid more attention to the IIS logs rather than my own application logs; it just didn't occur to me that these multiple, identical, simultaneous requests could have been coming from different sources.) This also explains why I couldn't find anything strange when analysing the request with Wireshark, and why Fiddler didn't show anything strange.
So the question for the bounty now becomes:
Why is Google making these requests so quickly after the page is requested? (I know they need to assess the page for content, but immediately afterwards, and multiple times, seems like abuse to me.)
What can I do to stop this?
And out of interest:
Has anyone else seen something similar in their logs? (Or is this something weird with my AdSense account?)
Ok, I'll apologise in advance for the length!...
This question is related to this one, regarding Google AdSense JavaScript code causing errors (of the form "Unable to post message to googleads.g.doubleclick.net. Recipient has origin something.com").
I won't duplicate all of the information there, but the conclusion seems to be that the AdSense JS is buggy. (please read the question for background if you have time).
I knew about this problem for some time, but decided to live with the JS errors rather than pulling AdSense from the site.
However, recently I noticed that in my ASP.NET MVC2 application, controller actions seemed to be called twice per page request (sometimes even 3 times). Oddly, it was only happening on the production server. After some thought I realised that one difference between the dev and production environments was that the AdSense JavaScript was only active in production.
To test this I removed all AdSense code from one of the production pages, and lo and behold, the multiple-page-load problem went away!
I thought that perhaps the general JS errors on the page were causing the problem, so to test this I introduced some simple errors into my own JS code; however, this did not cause the multiple-page-load problem to reappear.
One known situation where pages can be requested multiple times is when there are image tags or other external resource references with empty src attributes. Crucially, the most upvoted answer to the AdSense JS bug question notes that:
"The targetOrigin argument in this call, this.la is set to
http://googleads.g.doubleclick.net. However, the new iframe was
written with its src set to about:blank."
This seems eerily similar to the empty src issue... It seems too much of a coincidence, and currently I'm of the opinion that this is the problem.
[EDIT: This was a red herring]
However, I've no idea where to go from here. These multiple action calls are causing real problems (I'm having to use code blocking, serialised transactions, and all sorts of nasty hacks to limit the adverse effects). Of course, I could be barking up the wrong tree entirely; I'm puzzled that I can't find any other references to this, given the ubiquity of AdSense and the nature of the problem (but then again the conclusions of the AdSense JS bug question are also surprising). I would love this to turn out to be a stupid mistake on my part, so I need a sanity check.
I'd like to ask the community:
Has anyone else experienced this problem? Or can anyone who is using AdSense replicate and confirm it? [See note below]
Assuming the problem is what it seems, what can I do? (other than pulling AdSense of course)
If not, then what might be causing this?
To summarise:
- My actions are being executed 2 (sometimes 3) times per page request. THIS ONLY HAPPENS WHEN GOOGLE ADSENSE ADS ARE PRESENT.
- I removed all AdSense JS and introduced an error into my own JS: actions are called only once.
- A similar problem can happen when empty src attributes are present on the page.
- An answer to a previous question summarises that the AdSense JS sets src="about:blank" on an iframe.
- I have come to the conclusion that the src="about:blank" from the AdSense code is the most likely source of the problem.
- If I disable JavaScript in the browser, the problem goes away.
Just to document the things I have ruled out:
- This is happening across browsers: Chrome (12), Firefox (5) and IE (8).
- I have disabled all browser plugins (YSlow, Firebug, etc.).
- There are no empty src attributes (src="" / src="#") for images or other external resources in my HTML.
- There are no empty url('') references in the CSS.
- It's unlikely to be a server-side code/config problem, as it doesn't happen in dev (and one of the few differences between dev and production is the absence of the AdSense JS in dev).
Note: for anyone looking to replicate this, it should be noted that, strangely, when the multiple action calls happen, Fiddler shows only one request being sent to the server. I have no idea why this should be the case, but the server logging doesn't lie :) Perhaps someone with prior experience of this problem when caused by empty src attributes in img tags can say whether they saw the same behaviour in Fiddler.
Requested extra information
HTML (#Ivan)
Here's how I'm implementing the AdSense (IDs removed):
<%# Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<div class="ad">
<%if (!HttpContext.Current.IsDebuggingEnabled) { %>
<script type="text/javascript"><!--
google_ad_client = "ca-pub-xxxxxxxxxxxxxxx";
/* xxxxxxxxxxxxxxx */
google_ad_slot = "xxxxxxxxx";
google_ad_width = 728;
google_ad_height = 15;
//-->
</script>
<script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js">
</script>
<%} else { %>
<img src="/Content/images/googleAdMock728x15_4_e.gif" width="728" height="15" />
<%} %>
</div>
This is being inserted by a RenderPartial in the View:
<% Html.RenderPartial("AdSense_XXXXXX"); %>
TCP Logging (#Tomas)
So far I have done a Wireshark capture:
- on the client, when requesting a page on production with the problem
- on the client, when requesting a page on production without the problem (i.e. AdSense removed)
I can't really see a significant difference between the two (although my network skills are not great). One thing to note is that they both seem to have a TCP retransmission of the HTTP request immediately after the initial request; I don't know the significance of that. I can confirm, though, that in case 1 the server logs reported 2 executions, and in case 2 only one execution.
Next I will try TCP logging on the server side in both cases, and post results here.
Mediabot is the name given to the web crawler that Google uses to crawl webpages for purposes of analysing the content so Google AdSense can serve contextually relevant advertising to the page.
In my experience, it is unpredictable and, yes, it can be pretty heavy and annoying.
If you don't want the Mediapartners bot to access a specific page, you can disallow it in your robots.txt with:
#
# disallow adsense bot
#
User-agent: Mediapartners-Google
Disallow: /path/to/your/specific/page
This will have the drawback of serving untargeted ads on that specific page.
If you are always seeing this pattern on the same page with different query strings, adding a rel="canonical" link could ease the pain.
If you can't resolve this issue and you see it as abuse, don't hesitate to ask for help in the Crawling, Indexing and Ranking section of Google support.
Given that the behaviour you are observing appears to be hard to avoid, can we focus on workarounds instead?
Can you differentiate requests based on the User-Agent header, and thus filter them out?
Could that be a viable approach for you?
If so, then you could probably build on this approach: http://blog.flipbit.co.uk/2009/07/writing-iphone-sites-with-aspnet-mvc.html
There they detect iPhones, but the concept is the same for the Mediapartners-Google bot.
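As a hedged, Node/Express-style sketch of the idea (the question's stack is ASP.NET MVC, so this is only illustrative; the same User-Agent check ports directly):

var express = require('express');
var app = express();

app.use(function (req, res, next) {
    var ua = req.headers['user-agent'] || '';
    if (/Mediapartners-Google/i.test(ua)) {
        // give the AdSense crawler a cheap empty 200 instead of running the full action
        res.status(200).send('');
        return;
    }
    next();
});

Bear in mind the drawback already noted above: a crawler that never sees the page content will lead to untargeted ads.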
Aside from the embedding of the AdSense code itself, there are two things related to AdSense that differ in your two test cases:
What else happens when !HttpContext.Current.IsDebuggingEnabled? This appears to be the de facto production flag; maybe something else that depends on this same flag is happening somewhere.
Is it possible that Html.RenderPartial("AdSense_XXXXXX") is somehow causing your Controller to jump back to the beginning of its execution?
From your description, it seems like the execution is happening twice on the server but only one request is being sent from the client. This points to a server-side cause, and these two lines are the crux of your AdSense triggering. To narrow it down further, try embedding the AdSense partial directly instead of calling Html.RenderPartial(). If that doesn't change the result, it might be worth a sanity check on what else switches on HttpContext.Current.IsDebuggingEnabled.
Failing that, it might be helpful to know whether your server-side logging takes place as the request is received, before the response is sent, or after the response is sent.
Yes, I just detected this during a TeamViewer session with my partner. On my box, the main page of my site loads only once per request.
Then, by coincidence, while using Fiddler, my partner got 4 requests to the sample page. It is a 1.5 MB page with big scripts and lots of other dependencies, so this was truly a WTF moment; I have never seen anything like this in 15 years of web development.
If Google is doing this, I must say they should realize that today's sites might have very big pages and very big audiences. That could mean they are jacking up bandwidth by a factor of 4 per request. Like I said: WTF?
I wish this Q&A had a more definitive resolution.
I do use the Google Translate widget, but this only occurs on his box and for the main page. The other pages also use the Translate widget, and I request my jQuery via the Google CDN. Could anything from Google be doing this?

IE6 JavaScript problems with $Revision$ in the filename

We recently started using SVN keywords to automatically append the current revision number to all our <script src="..."> includes (so it looks like this: <script language="javascript" src="some/javascript.js?v=$Revision: 1234 $"></script>). This way, each time we push new code to production, stale caches won't leave users running old script revisions.
It works great, except in IE6. For some reason, IE6 sporadically acts as though some of those files didn't exist. We may get weird error messages like "Unterminated String Literal on line 1234", but if you try to attach a debugger process to it, it won't halt on that line (if you say "Yes" to the debugger prompt, nothing happens, and page execution continues). A log entry for it shows up in the IIS logs, indicating the user is definitely receiving the file (status code 200, with the appropriate number of bytes transferred).
It also only seems to happen when the pages are served over HTTPS, not over standard HTTP. To further compound things, it doesn't necessarily happen all the time; you might refresh a page 5 times and everything works, then refresh it 20 more times and it fails every time. For most users it seems to either always work or always fail. It is even unpredictable among multiple users in a corporate environment whose security and cache settings are forcibly identical.
Any thoughts or suggestions would be greatly appreciated, this has been driving me crazy for weeks.
Check your log with Fiddler2 to make sure the browser requests the page and doesn't serve it from the cache instead. Also check the URL of the JS script and the headers returned.
Are you using GZip? There have been issues reported with it.
I would suggest testing using the Internet Explorer Application Compatibility VPC image. That way, you can do your tests with a 100% genuine IE6, and not one of those plugins that claim to simulate IE6 inside another browser.
I think this is a very clever idea. However, I think the issue could be related to the spaces in the URL; technically, the spaces should be encoded.
See if you can customize the keyword expansion in SVN to generate a revision number without spaces or special characters.
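For example, a small sketch of reducing the expanded keyword to bare digits before it reaches the src attribute (how you run this depends on your build step):

// "$Revision: 1234 $" expands with spaces and dollar signs; keep just the number
var keyword = '$Revision: 1234 $';
var match = /(\d+)/.exec(keyword);
var rev = match ? match[1] : '0';        // "1234", or "0" if expansion failed
var src = 'some/javascript.js?v=' + rev; // no spaces left to confuse IE6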
