I have a large website that uses two large online advertisement "remnant" providers. These providers regularly start and stop ad campaigns that run on our website.
One of the ads coming from one of the providers is incorrectly making a request to:
/eyeblaster/addineyev2.html
I have determined that the requested file is used by some websites when their ads are served via iframes. In theory the file circumvents cross-domain restrictions so that the ad provider can resize the iframe using JavaScript from within the iframe.
I determined this use of the file by stumbling upon this support document:
http://support.google.com/dfp_premium/bin/answer.py?hl=en&answer=1085693
My problem is that our websites do not use iframes to deliver advertisements, so the requests to the "/eyeblaster/addineyev2.html" URI result in a 404 error and are unnecessary. Because the error page is generated by a large vendor-provided CMS, it renders with our Google Analytics tracking code on it. This has the effect of inflating our apparent pageviews.
The pageview inflation can be very severe, because the 404 error page also contains ads. That 404 page could also load the faulty ad, resulting in a recursive loop of ads loading the exact same "/eyeblaster/addineyev2.html" 404 page.
I have thus far been unable to witness an ad making a direct request to this URL via Firebug or similar developer tools. Yet the traffic to this non-existent page is gigantic, so the offending ad is certainly still in the mix. The problem is that I cannot figure out which ad is broken, so I can't tell our remnant providers to remove it. Both vendors are feigning ignorance of the issue.
I cannot remove the Google tracking code on the 404 error page, but I can add additional JavaScript to the page.
Is there any way I could identify the ad causing a request to "/eyeblaster/addineyev2.html" by adding some JavaScript to the 404 error page that results when that page is requested inside an iframe?
Essentially, almost a "frame buster" script that, instead of busting the frame, reports information on the HTML nodes near the iframe element? I think it's mildly possible, but I'm not seeing a clear path at the moment.
Thanks!
To avoid that unwanted tracking you should place a dummy empty file at /eyeblaster/addineyev2.html, or, if you use nginx, do something like:
server {
    ...
    # note: 'echo' requires the third-party ngx_echo module
    location = /eyeblaster/addineyev2.html { echo ""; }
}
or, better
server {
    ...
    # serve the 404 directly so the CMS error page (with its tracking code) never runs
    location = /eyeblaster/addineyev2.html { return 404 "404 - page not found"; }
}
If you don't have static hosting and cannot configure the web server, you can put a condition around your 404-page tracking in JavaScript:
if (document.URL.indexOf('/eyeblaster/addineyev2.html') === -1) {
    doAnalyticsTracking();
}
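For example, with the classic Universal Analytics snippet the guard might look like this (a sketch; 'UA-XXXXXX-Y' is a placeholder, and doAnalyticsTracking() above stands in for whatever your CMS actually emits):
// Sketch: only fire the pageview when this is NOT the phantom eyeblaster URL
if (document.URL.indexOf('/eyeblaster/addineyev2.html') === -1) {
    ga('create', 'UA-XXXXXX-Y', 'auto');  // placeholder property ID
    ga('send', 'pageview');
}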
I have found my own answer, and I'll share it here for the rare event another Web Developer is trying in vain to pinpoint an ad doing this same thing to them.
The offending digital ad was coming in with an iframe pointed toward "/eyeblaster/addineyev2.html". I used this knowledge and wrote the following JavaScript to gather information about the page that contained the iframe (i.e. the page with the ad on it).
// Only run when this 404 page has been loaded inside an iframe
if (top != self) {
    // Grab the raw HTML of every ad slot on the parent page and log it server-side
    $.post("/ad_diagnose/log.php", {
        a: $('#ad-div-one', top.document).html(),
        b: $('#ad-div-two', top.document).html(),
        c: $('#ad-div-three', top.document).html(),
        d: $('#ad-div-four', top.document).html(),
        e: $('#ad-div-five', top.document).html()
    });
}
This JavaScript uses jQuery (which our CMS provider includes on every page anyway). It checks whether the error page is inside an iframe (top != self), and then grabs the raw HTML of every element on the parent page that should contain an ad. Reading top.document only works here because the 404 page and the parent page are on the same domain.
That data is wrapped into an object and posted to a simple PHP script that writes every value posted to it to a log file.
In the end, I received a log file that was highly likely to contain the offending ad code. A quick grep on the file turned up the ad with an iframe pointing toward "/eyeblaster/addineyev2.html".
I hope this helps someone else out there!
It looks like more publishers are having this issue; I am too. Following Tal's instructions I was able to log information when pointing an iframe at a 404 page on purpose, but I haven't been able to catch the real problem, since it appears randomly and I can't tell why the script isn't catching it.
How about adding a real /eyeblaster/addineyev2.html and logging from that file, as sketched below?
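For instance, the stub page itself could record who framed it (a sketch; /ad_diagnose/log.php is the hypothetical logging endpoint from the answer above, and no jQuery is needed):
<!-- /eyeblaster/addineyev2.html : minimal stub that logs who loaded it -->
<script>
    // document.referrer is the URL of the document that created this frame,
    // which in the ad case is usually the ad server's iframe URL
    var payload = 'referrer=' + encodeURIComponent(document.referrer) +
                  '&framed=' + (top != self ? 1 : 0);
    // Fire-and-forget request to a logging endpoint (hypothetical path)
    new Image().src = '/ad_diagnose/log.php?' + payload;
</script>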
I was able to determine the owner of the script doing a simple web search. It is coming from http://www.mediamind.com/
But disabling "mediamind" in Google AdSense doesn't do the trick, so I asked their support to send me the file.
I am going to test the script and see whether the 404 calls go down. Maybe I will also use the script to check the content being loaded and determine the exact ad URL so I can shut it down.
Just thought I would share that this is happening over at our Ozzu website as well. I was first aware of the issue when some of our members were complaining, but I didn't look too deeply as I first thought it was an isolated instance.
Over the past month I have also noticed that my error log files on the server have been larger than normal, pushing the /usr partition to around 82% usage. I didn't put two and two together until today, when I finally started looking through the errors; it appears this is not an isolated instance of these eyeblaster-type ads. Many thousands of users are coming to our site and then getting redirected to a 404 page on our website because of this. Here is a sample of one of the errors in our log file, and it appears that numerous networks are using this eyeblaster software from Media Mind:
[Thu Dec 13 16:36:51 2012] [error] [client 123.123.123.123] File does not exist: /public_html/eyeblaster, referer: http://lax1.ib.adnxs.com/if?enc=AAAAAAAAAAAAAAAAAAAAAAAAAGC4Hvs_AAAAAAAAAAAAAAAAAAAAAMqchzp-qp9L_vlliXOoLV2gdMpQAAAAAEMUDABGAQAAQAEAAAIAAADXo0AA-FcCAAAAAQBVU0QAVVNEAKAAWAIAeAAAYk4AAgMCAQUAAIIA5BXJnQAAAAA.&cnd=%21QxtEWwidpzIQ18eCAhgAIPivCTAEOIDwBUABSMACUMOoMFgAYL4FaABwKngAgAH6AYgBAJABAZgBAaABAqgBALABALkBAAAAAAAAAADBAQAAAAAAAAAAyQEgEFk1j_LCP9kBAAAAAAAA8D_gAQA.&udj=uf%28%27a%27%2C+15986%2C+1355445408%29%3Buf%28%27c%27%2C+824221%2C+1355445408%29%3Buf%28%27r%27%2C+4236247%2C+1355445408%29%3B&ccd=%21mgWjMAidpzIQ18eCAhj4rwkgAQ..&vpid=18&referrer=http%3A%2F%2Fwww.ozzu.com%2F&dlo=1
[Thu Dec 13 16:36:56 2012] [error] [client 123.123.123.123] File does not exist: /public_html/eyeblaster
I have just contacted the Media Mind company as well to see if they have any further input. The errors in our logs are at least coming from a few ad servers such as:
lax1.ib.adnxs.com
showads.pubmatic.com
ad.yieldmanager.com
So it is my impression that numerous media companies are using this Eye Blaster software. I looked more into what Eye Blaster supposedly does, and it is some sort of technology that syncs numerous ads on the page as if they were one big ad. For instance, an animation will start in one ad and end in another. Anyway, it must be popular, as numerous ad companies seem to be using it, and as such I would probably have to disable a lot of advertisers.
I think the best way to fix the problem would be to have Media Mind address it, but I am not sure.
Anyway just wanted to share my experience and that this problem seems to be affecting numerous websites.
Related
I am trying to make a script that will compile statistics for my TikTok profile on my WordPress site. TikTok ironically sucks at giving you data about your profile; in my case I can't find a reliable way to check the total views I have on my profile from the native analytics page.
So I figured I would write a script that would take my TikTok page, scan through the HTML, find each page element that displays the view count on each post thumbnail, and add the value within that element to an array. Then write a function that takes care of the math from there.
I thought that would be fairly easy, but I am a victim of the Dunning-Kruger effect as a freshman in Software Engineering.
From what I've looked at, the answer seems to lie in jQuery. I have written this so far.
var views= []
jQuery.get("https://www.tiktok.com/#triplicata.html",function() {
//the view count element seems to change on different occasions but the element seems to always be "== $0"
jQuery.each("$0",function(){
var temp = jQuery.text();
views.push(temp);
});
});
When I try to run it in a tester and check the F12 console, it says something along the lines of:
Access to XMLHttpRequest at 'https://www.tiktok.com/#triplicata.html' from origin 'https://fiddle.jshell.net' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I can't even test the rest of the code because I seem to be cut off at the gate just trying to get the HTML in the first place. I don't really know much about jQuery, but everything seems correct from what I've seen.
I don't know why you would do this. Ever heard of viewsource?
<script src="https://cdn.jsdelivr.net/gh/Parking-Master/viewsource/vs.js"></script>
<script>
console.log(getSource('https://example.com'));
</script>
You can't access another website's source code from the browser for a good reason: the same-origin policy. And jQuery's get() method can't read a cross-origin response unless that site sends CORS headers, which TikTok does not.
I've been wrapping my head around this problem for a couple of days searching for all possible solutions on the forums and online but can't seem to get it working.
I'm calling a script by a link on a "button" to start a script on a server (in HTML):
<a href="#" onClick="RunScript();">
The script code is:
<script type="text/javascript" language="javascript">
    // Note: ActiveXObject only works in Internet Explorer, and only when the page's
    // security zone allows initializing ActiveX controls not marked as safe
    function RunScript() {
        var objShell = new ActiveXObject("WScript.Shell");
        objShell.Run("%comspec% /k my_projects_EN.vbs", 1, false);
    }
</script>
So why am I using a VBS? What I'm trying to do is create custom pages for each employee, so the VBS checks the computer name and an If clause directs the employee to a custom page. With my basic knowledge of programming, and after a lot of hours of searching, I have not found a better solution yet, so I'm trying to make this one work.
And it does, but only if I'm running the script locally (from the desktop). As the webpage will be used on an intranet, the script will live on a server, and this is where it gets a bit hairy, as I can't seem to find the right combination of commands. I already tried pushd to map the network location and setting the current directory for the script, but nothing seems to work completely.
I assume I'm missing a subroutine for the function, as adding anything there just stops the script, but how to go about it is beyond me.
All help is appreciated, even if it means I have to bury myself in another programming language (not preferred, of course).
I am certain that there is a way to solve this other than sending a script to each employee to put on their desktop (each time a new employee comes to work).
Thanks
Edit: I see an additional clarification is in order:
We're creating an intranet webpage to help our employees work more efficiently. We're regular employees, not IT, and have no admin rights, so we're on our own.
The point is to have a personal page for each employee, accessible via the same interface. So a link has to send each person to a different page; that is why I've created the VBS code, which helps with that. Having checked several other options, this seemed to be the simplest and best one, and it works at least partially. I don't see any security risks, as everything will be done on each client computer; the files themselves will be located on the server. The script itself does not represent any risk, at least not that I can see, but of course I'm not a specialist.
So in short this is what we're trying to do:
Main page -> link to My_projects button -> start script (located on the same server as the main page) -> determine the client computer name -> redirect to the right webpage.
Sorry for a lack of details, I see that it's sometimes hard to explain exactly what you want if you're not a pro in these things.
Thanks again.
If those computers are physically located at your workplace and you have control over the system, it would be better to tweak DNS redirections on those computers. Otherwise, a more general and OS-independent solution would be a session, cookie, or token on the employee's computer. Still, some kind of authentication other than simply possessing one particular machine would be more versatile and secure (unless your PCs are 1000 feet underground :-) ).
Edit: What kind of info/data is sent to the server script? A server script runs on the server, so everything related to "this computer" (e.g. its name) actually refers to the server itself. Thus the script needs some data from the client to recognize the client's computer.
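As a rough illustration of the cookie idea (every name here is invented, not part of your existing setup): the start page could ask once for the employee's page, remember it in a cookie, and redirect automatically on later visits:
// Sketch only: cookie-based redirect instead of reading the computer name
var match = document.cookie.match(/(?:^|; )employee_page=([^;]*)/);
if (match) {
    // Returning visitor: go straight to the stored personal page
    window.location.href = decodeURIComponent(match[1]);
} else {
    // First visit: ask once and remember the answer for a year
    var page = prompt('Enter your personal page, e.g. name_surname.html');
    if (page) {
        document.cookie = 'employee_page=' + encodeURIComponent(page) +
                          '; max-age=' + (60 * 60 * 24 * 365) + '; path=/';
        window.location.href = page;
    }
}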
Thanks for the effort.
Everything is actually located on the server, so the client computer only runs the page or interface, which lives in \\Server\folder\folder, for example.
In your browser you open the start page, which contains a button with a link to this script (located on the same server).
When the script executes, it looks up the computer name and sends the user to their personal page:
Set wshShell = CreateObject( "WScript.Shell" )
' Read the client machine's name from the environment
strComputerName = wshShell.ExpandEnvironmentStrings( "%COMPUTERNAME%" )
On Error Resume Next
'#01 name_surname
If strComputerName = "XXXXXXXX" Then
    ' Open that employee's personal page
    CreateObject("WScript.Shell").Run """name_surname.html"""
End If
and so on.
And this is all there is. As mentioned before, we don't have admin rights to change anything on the client computer, so nothing is done on the client side other than executing a script located on the server.
What domains/protocols in the img-src directive of the Content-Security-Policy header are required to allow Google AdWords conversion tracking?
From testing, when we call google_trackConversion, it looks like the browser creates an image with a src that follows a chain of 302 redirects between various domains...
www.googleadservices.com ->
googleads.g.doubleclick.net ->
www.google.com ->
www.google.co.uk
The final .co.uk looks suspicious to me. As we're testing from the UK, we're concerned that tracking called from other countries will redirect to other domains.
What is the complete list of domains that we need to open up in order for the tracking to work?
As requested in comments, an example path component of the first request is:
pagead/conversion/979383382/?random=1452934690748&cv=8&fst=1452934690748&num=1&fmt=3&label=jvoMCNP4umIQ1uiA0wM&guid=ON&u_h=1080&u_w=1920&u_ah=1033&u_aw=1920&u_cd=24&u_his=18&u_tz=0&u_java=false&u_nplug=5&u_nmime=7&frm=0&url=https%3A//beta.captevate.com/payment%3Flevel%3Da00&async=1
and repeating the conversion a second time, the path component of the first request is
pagead/conversion/979383382/?random=1452934959209&cv=8&fst=1452934959209&num=1&fmt=3&label=jvoMCNP4umIQ1uiA0wM&guid=ON&u_h=1080&u_w=1920&u_ah=1033&u_aw=1920&u_cd=24&u_his=26&u_tz=0&u_java=false&u_nplug=5&u_nmime=7&frm=0&url=https%3A//beta.captevate.com/payment%3Flevel%3Da00&async=1
I used a free VPN service to connect from a couple of countries (the Netherlands and Singapore), and the last redirect doesn't occur: the final request to www.google.com is a 200. However, I obviously haven't tried connecting from every country, so my original question stands.
Unfortunately, there aren't many ways around this. Resources require either whitelisting (in the case of remote resources, like this one) or inlining tricks (i.e. nonce or sha256-...) when CSP is active. At the end of the day, though, CSP can probably still make your site safer and protect most resources.
Depending on what you are trying to do, though, you may still be able to achieve your goal.
Here are some options:
Whitelist all images.
Of course, you could simply place a "*" in your img-src directive, but I imagine you already know that and are choosing not to because it defeats CSP's protection for images.
Load the image via alternate means.
If all you are after is specifically locking down images, and, say, don't care so much about XMLHttpRequest, you could load the pixel via POST or even via a <script> tag with a custom type (using the AdWords image tag tracking method). This takes advantage of the fact that Google only needs the browser to complete the HTTP request/response (and redirect) cycle for analytics purposes, and you don't really care about parsing or executing the resulting content, which is a 1x1 transparent pixel anyways. This allows you to lock down your img-src directive (if that is indeed your goal) while still allowing whatever domain Google would like to use for redirects.
I know this only moves your problem, but it's useful if your main threat is malicious images.
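As a rough sketch of that idea, the same conversion URL could be fired with fetch() (or navigator.sendBeacon()), which falls under connect-src rather than img-src; the conversion ID and label below are just the values from the question, and the relevant Google domains would still need to be allowed in connect-src:
// Sketch: complete the request/response cycle without an <img>, keeping img-src locked down
var conversionUrl = 'https://www.googleadservices.com/pagead/conversion/979383382/' +
    '?label=jvoMCNP4umIQ1uiA0wM&guid=ON&fmt=3&url=' + encodeURIComponent(location.href);
fetch(conversionUrl, { mode: 'no-cors', credentials: 'include' })
    .catch(function () { /* fire-and-forget: ignore network errors */ });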
Place all Google domains in your img-src.
As suggested below. Header lengths will be a problem (even if the specs say you're fine, implementors are not always so generous), and more importantly, you may encounter spurious failures as Google changes their list of domains, which is certainly not a public or easily noticeable action (besides your ad conversions not coming through!). Since I imagine your job isn't to update that list constantly, you probably don't want to go with this option.
Report failures for a few months and then roll with it.
Because CSP supports reporting URIs and the Content-Security-Policy-Report-Only variant, you can roll it out in report-only mode and wait for reports to come in. If you already have good data about your userbase (and it doesn't change much), this can be a good option - once you see those reports stabilize on a list of domains, seal it in a regular CSP header. Optionally, you can place a reporting URI on the final header to catch any additional failures. The downside of this strategy, of course, is that you don't get protection while in report-only mode, and when you switch to enforcing it, failures cause lost conversion data and you're playing catch up.
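For example, in nginx (matching the config style earlier in this thread), the report-only variant might look like this; the report endpoint and the domain list are placeholders:
server {
    ...
    # Nothing is blocked in report-only mode; violations are POSTed to the report URI
    add_header Content-Security-Policy-Report-Only "img-src 'self' *.google.com *.doubleclick.net *.googleadservices.com; report-uri /csp-report;";
}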
Static pixel with reverse proxy
Okay. Well, with the above options not being so great (I admit it), it's time to think outside the box. The problem here is that HTTP optimization techniques applied by Google (sharding/geo-pinning domains) are at odds with good security practice (i.e. CSP). The root cause of the domain ambiguity is the client's geographic location, so why not pin it yourself?
Assuming you have advanced control of your own HTTP server, you could use the static pixel approach for tracking and proxy the request yourself, like so:
User ---> GET http://your-page/
User <--- <html>...
pixel: http://your-page/pixel?some=params
User ---> http://your-page/pixel?some=params
---> fetch http://googleads.g.doubleclick.net/pagead/viewthroughconversion/12345/?some=params
<--- redirect to http://google.com, or http://google.co.uk
User <--- return redirect
Using a static pixel (like approach #2) and putting your proxy, say, in the US or UK should ensure that the source IP is geographically pinned there, and Google's anycast frontend should route you to a stable endpoint. Placing a proxy in between the user and Google also gives you a chance to force-rewrite the redirect if you want to.
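If you build that proxy leg yourself, a minimal nginx sketch might be (the /pixel path and the conversion ID are illustrative only):
location /pixel {
    # The request to Google now originates from this server's fixed geographic location,
    # so Google should return a stable redirect target regardless of where the user is
    proxy_pass https://googleads.g.doubleclick.net/pagead/viewthroughconversion/12345/;
    proxy_set_header Host googleads.g.doubleclick.net;
}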
To simplify proxy setup (and add some performance spice), you could opt for something like Fastly with Origin Shielding instead of building it yourself. If you add the DoubleClick backend and proxy from there, you can pin the originating requests from the CDN to come only from a certain geographic region. Either way, your user should see a stable set of redirects, and you can trim down that list of Google domains to just img-src 'self' *.google.com *.doubleclick.net *.googleadservices.com.
Edit: It is also worth noting that Fastly (and a growing list of other CDN providers) peer directly with Google Cloud at a few of their Points-of-Presence, offering an optimized path into Google's networks for your proxied traffic.
What are you trying to achieve by locking down your img-src?
CSP is a great security option, but most issues are with JavaScript (which can cause all sorts of issues), CSS (which can be used to hide or overlay elements with injected content) or framing options (which can be used for click-jacking by similarly overlaying content). Images are a much smaller risk IMHO.
There are few security risks that I can think of with loading images, which boil down to:
Tracking and the privacy implications of that. Though you are already using Google Adwords which tracks so much. And those that care about this typically block it in their browser.
Loading of insecure content (I presume you are using HTTPS exclusively, or this whole conversation is a bit pointless?). This can be remediated with a looser CSP policy of just https: for img-src.
Loading an image and subsequently overlaying part of your website with that rogue image. But that requires JavaScript and/or CSS injection too, which should be locked down in CSP.
Ultimately unless you have a XSS vulnerability people shouldn't be able to easily load images into your pages. And even if they could I think the risks are small.
So I would be tempted to just have "img-src 'self' https:;" rather than try any of the workarounds the others have suggested, all of which have downsides and are not very future-proof.
Ultimately if you are that concerned about security of your site that locking down images is a high priority I would question whether you should be running Google Adwords.
However if there is a specific threat you are trying to protect against, while at the same time still allowing Adwords, then provide details of that and there may be other ways around it. At the moment you've asked for a solution to particular problem without necessarily explaining the actual underlying problem which may have solutions other than the one you are asking about.
You can use Wikipedia's List of Google domains. There are many domains unrelated to Google Adwords, but I don't think allowing domains like youtube.com could cause problems.
Currently the list is:
google.com
google.ac
google.ad
google.ae
google.com.af
google.com.ag
google.com.ai
google.al
google.am
google.co.ao
google.com.ar
google.as
google.at
google.com.au
google.az
google.ba
google.com.bd
google.be
google.bf
google.bg
google.com.bh
google.bi
google.bj
google.com.bn
google.com.bo
google.com.br
google.bs
google.bt
google.co.bw
google.by
google.com.bz
google.ca
google.com.kh
google.cc
google.cd
google.cf
google.cat
google.cg
google.ch
google.ci
google.co.ck
google.cl
google.cm
google.cn
g.cn
google.com.co
google.co.cr
google.com.cu
google.cv
google.com.cy
google.cz
google.de
google.dj
google.dk
google.dm
google.com.do
google.dz
google.com.ec
google.ee
google.com.eg
google.es
google.com.et
google.fi
google.com.fj
google.fm
google.fr
google.ga
google.ge
google.gf
google.gg
google.com.gh
google.com.gi
google.gl
google.gm
google.gp
google.gr
google.com.gt
google.gy
google.com.hk
google.hn
google.hr
google.ht
google.hu
google.co.id
google.iq
google.ie
google.co.il
google.im
google.co.in
google.io
google.is
google.it
google.je
google.com.jm
google.jo
google.co.jp
google.co.ke
google.ki
google.kg
google.co.kr
google.com.kw
google.kz
google.la
google.com.lb
google.com.lc
google.li
google.lk
google.co.ls
google.lt
google.lu
google.lv
google.com.ly
google.co.ma
google.md
google.me
google.mg
google.mk
google.ml
google.com.mm
google.mn
google.ms
google.com.mt
google.mu
google.mv
google.mw
google.com.mx
google.com.my
google.co.mz
google.com.na
google.ne
google.com.nf
google.com.ng
google.com.ni
google.nl
google.no
google.com.np
google.nr
google.nu
google.co.nz
google.com.om
google.com.pk
google.com.pa
google.com.pe
google.com.ph
google.pl
google.com.pg
google.pn
google.co.pn
google.com.pr
google.ps
google.pt
google.com.py
google.com.qa
google.ro
google.rs
google.ru
google.rw
google.com.sa
google.com.sb
google.sc
google.se
google.com.sg
google.sh
google.si
google.sk
google.com.sl
google.sn
google.sm
google.so
google.st
google.sr
google.com.sv
google.td
google.tg
google.co.th
google.com.tj
google.tk
google.tl
google.tm
google.to
google.tn
google.com.tr
google.tt
google.com.tw
google.co.tz
google.com.ua
google.co.ug
google.co.uk
google.com
google.com.uy
google.co.uz
google.com.vc
google.co.ve
google.vg
google.co.vi
google.com.vn
google.vu
google.ws
google.co.za
google.co.zm
google.co.zw
admob.com
adsense.com
adwords.com
android.com
blogger.com
blogspot.com
chromium.org
chrome.com
chromebook.com
cobrasearch.com
googlemember.com
googlemembers.com
com.google
feedburner.com
doubleclick.com
igoogle.com
foofle.com
froogle.com
googleanalytics.com
google-analytics.com
googlecode.com
googlesource.com
googledrive.com
googlearth.com
googleearth.com
googlemaps.com
googlepagecreator.com
googlescholar.com
gmail.com
googlemail.com
keyhole.com
madewithcode.com
panoramio.com
picasa.com
sketchup.com
urchin.com
waze.com
youtube.com
youtu.be
yt.be
ytimg.com
youtubeeducation.com
youtube-nocookie.com
like.com
google.org
google.net
466453.com
gooogle.com
gogle.com
ggoogle.com
gogole.com
goolge.com
googel.com
duck.com
googlee.com
googil.com
googlr.com
googl.com
gmodules.com
googleadservices.com
googleapps.com
googleapis.com
goo.gl
googlebot.com
googlecommerce.com
googlesyndication.com
g.co
whatbrowser.org
localhost.com
withgoogle.com
ggpht.com
youtubegaming.com
However, if you want to be sure if that's really all domains, you should ask Google directly.
Unfortunately there is no clean workaround; CSP only accepts wildcards (*) on the left side of the domain.
You can disable this feature in GTM or Universal Analytics, but Google Ads requires it to calculate the segments used to target your ad; otherwise your ads will be very expensive (and poorly targeted).
So: you can check all valid Google domains here: https://www.google.com/supported_domains
and add them to the whitelist for img-src and connect-src in your CSP policy, then cross your fingers that Google will not add more (you could monitor this URL for changes with any of the services that do this; a sketch follows).
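A rough sketch of monitoring that list and turning it into a CSP fragment (Node.js; assumes the file is one domain per line, each starting with a dot):
// Sketch: fetch Google's supported_domains list and print an img-src fragment.
// Run it periodically (e.g. from cron) and diff the output against what is deployed.
const https = require('https');

https.get('https://www.google.com/supported_domains', (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
        const sources = body.split('\n')
            .map((d) => d.trim())
            .filter(Boolean)
            .map((d) => '*' + d);        // ".google.com" -> "*.google.com"
        console.log("img-src 'self' " + sources.join(' ') + ';');
    });
});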
This nightmare ends in mid-2023, when Universal Analytics is deprecated; GA4 does not use this.
Not sure if you are using anything to report CSP failures; we discovered the service https://report-uri.com/. The free tier gives a reasonable endpoint for failure reports, though once we went live we burned our quota in two days... but it did help to find holes in our CSP.
It did crash our server: we had to increase the HTTP header size limit once we had put all the Google domains in twice.
If you search on Google 'new york state beach cleanup', you'll see that the first result is for the website http://najomawi.com, but the title doesn't look quite right for such a site. You'll also notice that if you click this link it instead takes you to a website for Nike shoes. It only happens if you use the Google results link though (and I believe it happens in Bing, Yahoo and others). If you put http://najomawi.com directly into your browser bar, it takes you to the correct site. Confused, I checked the page source code (both with 'View Page Source' and Chrome's inspector) and found this...
<script>
var s=document.referrer;
if(s.indexOf("google")>0 || s.indexOf("bing")>0 || s.indexOf("aol")>0 || s.indexOf("yahoo")>0)
{
self.location='http://www.theredkicks.com';
}
</script>
I have no idea how this got there. It appears within the head tags of the home page, which is index.html. There is no PHP code, no other JS, nothing other than CSS stylesheets that I am aware of. The entire site is pretty much static HTML and CSS sheets. So how did this get there? And how can I get rid of it?
The JavaScript code is very simple. It just checks if document.referrer contains the name of the most relevant search engines and, if so, redirects the load to another page, in this case, http://www.theredkicks.com.
Your site certainly was hacked somehow or your host provider is not very honest.
Notice that there's nothing attached to the query string in this redirect, so this is not an "affiliate" (wrong) way to make money. The only person that is gaining something with this is the redirect target.
Also, it's very interesting that your page is apparently being processed through ASP. That is strange, given that you say your site is made up only of static HTML and CSS.
Look at the cookie; it is something like this:
ASPSESSIONIDSATCSAAC=INMLBOADDKNKMPACCK
And also the headers:
HTTP/1.1 200 OK
Date: Fri, 10 Jan 2014 01:30:49 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Length: 15168
Content-Type: text/html
Cache-control: private
I don't know where you are hosting your site, but you should claim urgent solution for this problem there.
No. This is an injection being done conditionally, pointing to your DNS server/records being compromised.
Your DNS primary and secondary records are being routed through siteprotect.com. I have no idea if you have chosen siteprotect as your DNS handlers, but siteprotect.com doesn't actually resolve at the moment. I also have no idea who "siteprotect" are.
If your actual host is not "siteprotect" and you have not heard of them, reset your DNS records to those of your host and change your passwords etc. If your host is "siteprotect" they may be aware of the problem and working on it.
I've read several of the questions on this but am still a little confused.
For example: OK, I can't post examples because of hyperlink limitations
Here is my exact situation.
I have a site at mydomain.com
One of the pages has an iframe to another page at sub.mydomain.com
I am trying to prepare an onload script so that, if the page is not in an iframe, or the parent domain of the page containing the iframe is not mydomain.com, it redirects to mydomain.com.
After the initial permission issues I realised the problem with sub domains counting as separate domains.
One of the posts above says that "could each use either foo.mydomain.com or just mydomain.com"
So I tried (for testing):
onload="document.domain='mydomain.com';alert(parent.location.href);"
This produced the error:
Error: Permission denied for <http://sub.mydomain.net> (document.domain=<http://mydomain.net>) to get property Location.href from <http://mydomain.net> (document.domain has not been set).
Source File: http://sub.mydomain.net/?pageID=1&framed=1
Line: 1
Removing the alert produces no errors.
Maybe I am going about this the wrong way, since I do not need to interact with the parent, just read its domain if there is one.
All I want is a nice simple top domain, read-only. There must be a way, so that people can prevent their own pages from being used within other people's sites.
You can't (easily) do this because of security restrictions.
This answer from #2771397 might point you in the right direction.
OK, while looking at the error console I still had open when I got home, a wee lightbulb lit up. I am pretty new to JavaScript (can you tell? ;) ) but I thought, "If it has try/catch..."
Well, here is a hack that at least gets the name of the top domain, and an example of how I will use it on my site to show content only if the page is framed by the correct domain.
Firstly, the header will have the following partially PHP-generated function:
function getParentDomain()
{
    try
    {
        // This throws if the top window is on a different origin
        var wibble = top.location.href;
    }
    catch(err)
    {
        // The error message includes the blocked URL, so check it for our domain
        if (err.message.indexOf('http://mydomain.com') != -1)
        {
            createCookie('IAmAWomble', 'value');
        }
    }
}
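The createCookie helper isn't shown above; a minimal stand-in (session-length cookie, no expiry handling) could be:
// Minimal stand-in for the createCookie helper used above (session cookie only)
function createCookie(name, value)
{
    document.cookie = name + '=' + encodeURIComponent(value) + '; path=/';
}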
Basically the value will be something based on the PHP session I think. This will be executed at page load.
If the page is not within the proper site or if javascript is not enabled then the cookie will not be created.
PHP will then attempt to read the correct value from the cookie and show the content or an error message as appropriate.
I do see a slight flaw in this for the first visit, since the onload script will run after PHP has already generated the content, but I'm sure I can work around it somehow. I thought I'd post because this is at least what I was initially asking for: a way to read the URL of a parent site when it is in a different domain to the site in the frame.
IIUC you want to use the window.parent attribute: “A reference to the parent of the current window or subframe.”
Presumably, window.parent.document.location.host contains the container page's domain name.
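Note that reading window.parent.document (or parent.location.href) throws when the parent page is on a different origin, so in practice a fallback is needed; a small sketch:
// Sketch: get the framing page's host, falling back to document.referrer cross-origin
function getParentHost() {
    if (window.parent === window) return null;    // not framed at all
    try {
        return window.parent.location.host;       // only works when same-origin
    } catch (e) {
        // Cross-origin: the referrer of a framed page is usually the framing page's URL
        var m = document.referrer.match(/^https?:\/\/([^\/]+)/);
        return m ? m[1] : null;
    }
}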