I have tried to getImageData but in the console I see this error:
Uncaught DOMException: Failed to execute 'getImageData' on 'CanvasRenderingContext2D': The canvas has been tainted by cross-origin data.
at HTMLImageElement.img.onload (file:///C:/Users/HOME/Desktop/programmi/HTML-Javascript/caso/graphic/imgData/arc/main.js:16:17)
This is my JavaScript and HTML code:
var canvas = document.getElementById('canvas'),
    ctx = canvas.getContext('2d'),
    width = canvas.width = 434,
    height = canvas.height = 362;

var img = new Image();
img.src = 'https://cdn.pixabay.com/photo/2017/02/26/12/27/oranges-2100108_640.jpg';
//img.crossOrigin = 'Anonymous';
img.onload = function() {
    ctx.drawImage(img, 0, 0);
    var data = ctx.getImageData(10, 10, 11, 11);
    console.log(data);
};
<!DOCTYPE html>
<html>
  <head>
    <title></title>
  </head>
  <body>
    <canvas id='canvas'></canvas>
    <script type="text/javascript" src='main.js'></script>
  </body>
</html>
Let me guess... you're using Google Chrome as your browser, which is the most problematic browser for this issue.
The issue centres upon what is known as cross-site security. In short, modern browsers are designed to prevent cross-site content from being loaded into a page, except under special conditions, in part to stop malicious code being injected into the web pages that you load.
This extends to images and HTML5 canvas data. The reason is that some sneaky individuals discovered, early in the days of the HTML5 canvas, that they could use it to take an image snapshot of your current browser contents and send that snapshot back to be perused at leisure. If you were browsing your bank's website at the time, engaging in sensitive financial transactions online, whoever used this technique would quickly be able to find out about your finances, and possibly even pose as you to raid your account.
That's one of the reasons restrictions on cross-site content were introduced.
Now, of course, there are legitimate reasons for having cross-site content, such as having one's fonts or images stored in a separate repository. Trouble is, the cross-site restrictions impact legitimate uses as well as illegitimate ones.
Consequently, in order to use images cross-site, you have to do so in conformity with the CORS protocol, and to do that you need your images to be provided by a web server that's set up to handle CORS requests. Simply setting the img.crossOrigin property of an Image object to "anonymous" won't work on its own: the server sending the images must also respond with the appropriate CORS headers (notably Access-Control-Allow-Origin) before the browser will treat the image as acceptable from a security standpoint.
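For illustration, here is a hedged sketch of what the client side of a CORS-enabled load looks like (ctx is the 2D context from the question's code; this only succeeds if the CDN actually replies with a suitable Access-Control-Allow-Origin header):

// Sketch only: works ONLY if the image server sends CORS headers.
var corsImg = new Image();
corsImg.crossOrigin = 'anonymous'; // must be set BEFORE assigning src
corsImg.src = 'https://cdn.pixabay.com/photo/2017/02/26/12/27/oranges-2100108_640.jpg';
corsImg.onload = function () {
    ctx.drawImage(corsImg, 0, 0);
    // The canvas is not tainted now, so getImageData is permitted.
    console.log(ctx.getImageData(10, 10, 11, 11));
};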
Which means that, to solve your problem, you need to do one of the following:
[1] Install a local web server to perform this task for you - this option involves much tedium reading the web server manual in order to set the server up properly, and much editing of configuration files;
[2] Write your own server to run under Node.js or similar - this option involves even more tedium learning how to write your own web server, and how to make that server handle CORS transactions properly (a minimal sketch follows below).
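To give a flavour of option [2], here is a minimal sketch of a Node.js static file server that adds a permissive CORS header. Everything in it (the port, the directory layout, the wildcard origin) is an assumption for illustration; a real deployment would want proper MIME types and hardening.

var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (req, res) {
    // Map '/' to index.html; serve everything else from this directory.
    var file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
    fs.readFile(file, function (err, data) {
        if (err) { res.writeHead(404); res.end('Not found'); return; }
        // The header that lets the browser read cross-origin image data.
        res.writeHead(200, { 'Access-Control-Allow-Origin': '*' });
        res.end(data);
    });
}).listen(8000, function () {
    console.log('Serving on http://localhost:8000');
});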
Now, if you're testing code offline in the "old school" manner, Firefox will let you access images from the same directory as your code via the file:// protocol, and won't complain. Firefox apparently has enough intelligence to realise that an image extracted from the same directory of your hard drive as your web page constitutes a same-origin image.
If, however, you're using Google Chrome, it won't. At least, not unless you run it using special command line parameters. Even then, Chrome has a propensity to throw temper tantrums when asked to handle this sort of request. It's an issue I wrestle with frequently, and though some might be tempted to tell me to do my testing in Firefox to avoid these woes, Chrome's internal debugger is, for me at least, far more pleasant to use than Firefox's debugger, which, in my current Firefox installation, crawls like a snail on Mogadon, and exhibits a friendliness and smoothness of use reminiscent of a cocaine-soaked pit viper.
So, if you're using Chrome, because like me, you like its internal debugger, you're stuck with the two options I've given above. Either install a web server (Apache, Nginx, take your pick) or install Node.js and write your own. Neither option is easy.
Related
I'm trying to load an image called "border.bmp" so that I can use it as a texture for WebGL. Here's how I reference the image.
var img;
function preload() {
    img = loadImage("assets/border.bmp");
}
I then get this error in the console.
Access to Image at 'file:///C:/P5.js/empty-example/assets/border.bmp' from
origin 'null' has been blocked by CORS policy: Invalid response. Origin
'null' is therefore not allowed access.
What is this error message? What does it mean? How do I load the image?
The comment by gman and the answer by Dale are both correct. I highly recommend you take the time to understand exactly what they're saying before you downvote or dismiss them.
CORS stands for cross-origin resource sharing, and basically it's what allows or prevents JavaScript on one site from accessing stuff on another site. A quick google search of "JavaScript CORS" or just "cors" will give you a ton of information that you should read through.
So if you're getting a CORS error, that means that the site that holds your image is not letting the site that holds your code access the image. In your case, it looks like you're loading stuff from a file: URL, which is not being loaded from a server. That's what the Origin null part of the error means.
So, step one is to listen to gman's comment and run your sketch from a local webserver instead of using a file: URL. His comment already contains links explaining how to do that, or you could use the P5.js web editor or CodePen or any other basic web host.
The most common setup is to include the image files in the same server as the code, so that should be pretty much all you need to do. But if you're storing the images at a different URL than the code, then step 2 is to follow Dale's answer and setup your image server to allow requests from your code server.
While the following doesn't directly answer the OP's question, it can help others, like me, who ended up here searching for P5.JS CORS.
To bypass CORS restrictions, simply prefix the blocked URL with https://cors-anywhere.herokuapp.com/, e.g.:
var url = 'https://cors-anywhere.herokuapp.com/http://blocked.url';
P5.JS Example:
Let's say you want to load a remote MP4 video (it can be any file type) using P5.JS from a server that doesn't have CORS enabled; for these situations, you can use:
var vid;
function setup() {
    vid = createVideo(['https://cors-anywhere.herokuapp.com/http://techslides.com/demos/sample-videos/small.mp4'], vidLoad);
}

// This function is called when the video loads
function vidLoad() {
    vid.play();
}
UPDATE:
The demo server of CORS Anywhere (cors-anywhere.herokuapp.com) is meant to be a demo of this project. But abuse has become so common that the platform where the demo is hosted (Heroku) has asked me to shut down the server, despite efforts to counter the abuse (rate limits in #45 and #164, and blocking other forms of requests). Downtime becomes increasingly frequent (e.g. recently #300, #299, #295, #294, #287) due to abuse and its popularity.
To counter this, I will make the following changes:
The rate limit will decrease from 200 (#164) per hour to 50 per hour.
By January 31st, 2021, cors-anywhere.herokuapp.com will stop serving as an open proxy.
From February 1st, 2021, cors-anywhere.herokuapp.com will only serve requests after the visitor has completed a challenge: the user (developer) must visit a page at cors-anywhere.herokuapp.com to temporarily unlock the demo for their browser. This allows developers to try out the functionality, to help with deciding on self-hosting or looking for alternatives.
CORS stands for cross-origin resource sharing. By default, WebGL (or rather, the browser) will block reads from any resource (images, textures, etc.) that comes from an external origin, and a local file:// resource is treated as being from an external origin.
You can bypass this by using img.crossOrigin = "";
I'm not an expert in WebGL; all this information was found here
If you want to load local images in P5, you can use the canvas's drawImage() function instead of P5.js's image() function:
let img;

function setup() {
    createCanvas(windowWidth, windowHeight);
    // A plain DOM image, created outside of p5's loading pipeline.
    // Note it loads asynchronously, so it may not be ready for the
    // very first draw() calls.
    img = new Image();
    img.src = "assets/smiley.png";
}

function draw() {
    // Draw directly on the underlying canvas context, bypassing p5's image().
    drawingContext.drawImage(img, mouseX, mouseY);
}
What domains/protocols in the img-src directive of the Content-Security-Policy header are required to allow Google AdWords conversion tracking?
From testing, when we call google_trackConversion, it looks like the browser creates an image with a src that follows a chain of 302 redirects between various domains...
www.googleadservices.com ->
googleads.g.doubleclick.net ->
www.google.com ->
www.google.co.uk
The final .co.uk looks suspicious to me. As we're testing from the UK, we're concerned that tracking called from other countries will redirect to other domains.
What is the complete list of domains that we need to open up in order for the tracking to work?
As requested in comments, an example path component of the first request is:
pagead/conversion/979383382/?random=1452934690748&cv=8&fst=1452934690748&num=1&fmt=3&label=jvoMCNP4umIQ1uiA0wM&guid=ON&u_h=1080&u_w=1920&u_ah=1033&u_aw=1920&u_cd=24&u_his=18&u_tz=0&u_java=false&u_nplug=5&u_nmime=7&frm=0&url=https%3A//beta.captevate.com/payment%3Flevel%3Da00&async=1
and repeating the conversion a second time, the path component of the first request is
pagead/conversion/979383382/?random=1452934959209&cv=8&fst=1452934959209&num=1&fmt=3&label=jvoMCNP4umIQ1uiA0wM&guid=ON&u_h=1080&u_w=1920&u_ah=1033&u_aw=1920&u_cd=24&u_his=26&u_tz=0&u_java=false&u_nplug=5&u_nmime=7&frm=0&url=https%3A//beta.captevate.com/payment%3Flevel%3Da00&async=1
I used a free VPN service to connect from a couple of countries (Netherlands and Singapore), and the last redirect doesn't occur: the final request to www.google.com is a 200. However, I obviously haven't tried connecting from every country, so my original question stands.
Unfortunately, there aren't many ways around this. Resources require either whitelisting (in the case of remote resources, like this one) or inlining tricks (i.e. nonce or sha256-...) when CSP is active. At the end of the day, though, CSP can probably still make your site safer and protect most resources.
Depending on what you are trying to do, though, you may still be able to achieve your goal.
Here are some options:
Whitelist all images.
Of course, you could simply place a "*" in your img-src directive, but I imagine you already know that and are choosing not to because it defeats CSP's protection for images.
Load the image via alternate means.
If all you are after is specifically locking down images, and, say, don't care so much about XMLHttpRequest, you could load the pixel via POST or even via a <script> tag with a custom type (using the AdWords image tag tracking method). This takes advantage of the fact that Google only needs the browser to complete the HTTP request/response (and redirect) cycle for analytics purposes, and you don't really care about parsing or executing the resulting content, which is a 1x1 transparent pixel anyways. This allows you to lock down your img-src directive (if that is indeed your goal) while still allowing whatever domain Google would like to use for redirects.
I know this only moves your problem, but it's useful if your main threat is malicious images.
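As a rough, hypothetical sketch of that idea (the URL and parameters below are placeholders, not Google's real endpoint contract): a fetch in no-cors mode completes the request/redirect cycle without the response ever being readable, and it falls under connect-src rather than img-src.

// Fire-and-forget conversion ping; the placeholder URL stands in for
// whatever endpoint your tracking tag would normally hit via an <img>.
fetch('https://www.googleadservices.com/pagead/conversion/12345/?label=abc', {
    mode: 'no-cors' // opaque response; we only need the request to complete
}).catch(function () {
    // Swallow errors; analytics should never break the page.
});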
Place all Google domains in your img-src.
As suggested below. Header lengths will be a problem (even if the specs say you're fine, implementors are not always so generous), and more importantly, you may encounter spurious failures as Google changes their list of domains, which is certainly not a public or easily noticeable action (besides your ad conversions not coming through!). Since I imagine your job isn't to update that list constantly, you probably don't want to go with this option.
Report failures for a few months and then roll with it.
Because CSP supports reporting URIs and the Content-Security-Policy-Report-Only variant, you can roll it out in report-only mode and wait for reports to come in. If you already have good data about your userbase (and it doesn't change much), this can be a good option - once you see those reports stabilize on a list of domains, seal it in a regular CSP header. Optionally, you can place a reporting URI on the final header to catch any additional failures. The downside of this strategy, of course, is that you don't get protection while in report-only mode, and when you switch to enforcing it, failures cause lost conversion data and you're playing catch up.
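For concreteness, a report-only rollout might look something like this minimal Node.js sketch (the report endpoint is a placeholder; substitute however your stack sets response headers):

var http = require('http');
http.createServer(function (req, res) {
    // Phase 1: observe only - violations are reported, nothing is blocked.
    res.setHeader('Content-Security-Policy-Report-Only',
        "img-src 'self'; report-uri https://example.com/csp-reports");
    // Phase 2 (later): promote the stabilised policy to the enforcing
    // Content-Security-Policy header, optionally keeping a report-uri.
    res.end('<!DOCTYPE html><html><body>...</body></html>');
}).listen(8080);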
Static pixel with reverse proxy
Okay. Well, with the above options not being so great (I admit it), it's time to think outside the box. The problem here is that HTTP optimization techniques applied by Google (sharding/geo-pinning domains) are at odds with good security practice (i.e. CSP). The root cause of the domain ambiguity is the client's geographic location, so why not pin it yourself?
Assuming you have advanced control of your own HTTP server, you could use the static pixel approach for tracking and proxy the request yourself, like so:
User ---> GET http://your-page/
User <--- <html>...
pixel: http://your-page/pixel?some=params
User ---> http://your-page/pixel?some=params
---> fetch http://googleads.g.doubleclick.net/pagead/viewthroughconversion/12345/?some=params
<--- redirect to http://google.com, or http://google.co.uk
User <--- return redirect
Using a static pixel (like approach #2) and putting your proxy, say, in the US or UK should ensure that the source IP is geographically pinned there, and Google's anycast frontend should route you to a stable endpoint. Placing a proxy in between the user and Google also gives you a chance to force-rewrite the redirect if you want to.
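A very rough Node.js sketch of that /pixel endpoint (the DoubleClick URL and conversion ID are placeholders taken from the diagram; error handling omitted):

var http = require('http');
// A 1x1 transparent GIF, base64-encoded.
var PIXEL = Buffer.from('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64');

http.createServer(function (req, res) {
    if (req.url.indexOf('/pixel') !== 0) { res.writeHead(404); res.end(); return; }
    var qs = req.url.split('?')[1] || '';
    // Forward the tracking parameters to Google from our own, geo-pinned IP.
    http.get('http://googleads.g.doubleclick.net/pagead/viewthroughconversion/12345/?' + qs,
        function (upstream) {
            upstream.resume(); // drain; we only care that the request happened
        });
    // Return a stable response to the user, swallowing Google's redirect chain.
    res.writeHead(200, { 'Content-Type': 'image/gif' });
    res.end(PIXEL);
}).listen(8080);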
To simplify proxy setup (and add some performance spice), you could opt for something like Fastly with Origin Shielding instead of building it yourself. If you add the DoubleClick backend and proxy from there, you can pin the originating requests from the CDN to come only from a certain geographic region. Either way, your user should see a stable set of redirects, and you can trim down that list of Google domains to just img-src 'self' *.google.com *.doubleclick.net *.googleadservices.com.
Edit: It is also worth noting that Fastly (and a growing list of other CDN providers) peer directly with Google Cloud at a few of their Points-of-Presence, offering an optimized path into Google's networks for your proxied traffic.
What are you trying to achieve by locking down your img-src?
CSP is a great security option, but most issues are with JavaScript (which can cause all sorts of issues), CSS (which can be used to hide or overlay elements with injected content) or framing options (which can be used for click-jacking by similarly overlaying content). Images are a much smaller risk IMHO.
There are few security risks that I can think of with loading images, which boil down to:
Tracking, and the privacy implications of that. Though you are already using Google Adwords, which tracks so much, and those that care about this typically block it in their browser.
Loading of insecure content (I presume you are using HTTPS exclusively, or this whole conversation is a bit pointless?). This can be remediated with a looser CSP policy of just https: for img-src.
Loading an image and subsequently overlaying part of your website with that rogue image. But that requires JavaScript and/or CSS injection too - which should be locked down in CSP.
Ultimately, unless you have an XSS vulnerability, people shouldn't be able to easily load images into your pages. And even if they could, I think the risks are small.
So, I would be tempted to just have "img-src 'self' https:;" rather than try any of the other workarounds the others have suggested - all of which have downsides and are not very future-proof.
Ultimately, if you are so concerned about the security of your site that locking down images is a high priority, I would question whether you should be running Google Adwords.
However, if there is a specific threat you are trying to protect against while still allowing Adwords, then provide details of that and there may be other ways around it. At the moment you've asked for a solution to a particular problem without necessarily explaining the actual underlying problem, which may have solutions other than the one you are asking about.
You can use Wikipedia's List of Google domains. There are many domains unrelated to Google Adwords, but I don't think allowing domains like youtube.com could cause problems.
Currently the list is:
google.com
google.ac
google.ad
google.ae
google.com.af
google.com.ag
google.com.ai
google.al
google.am
google.co.ao
google.com.ar
google.as
google.at
google.com.au
google.az
google.ba
google.com.bd
google.be
google.bf
google.bg
google.com.bh
google.bi
google.bj
google.com.bn
google.com.bo
google.com.br
google.bs
google.bt
google.co.bw
google.by
google.com.bz
google.ca
google.com.kh
google.cc
google.cd
google.cf
google.cat
google.cg
google.ch
google.ci
google.co.ck
google.cl
google.cm
google.cn
g.cn
google.com.co
google.co.cr
google.com.cu
google.cv
google.com.cy
google.cz
google.de
google.dj
google.dk
google.dm
google.com.do
google.dz
google.com.ec
google.ee
google.com.eg
google.es
google.com.et
google.fi
google.com.fj
google.fm
google.fr
google.ga
google.ge
google.gf
google.gg
google.com.gh
google.com.gi
google.gl
google.gm
google.gp
google.gr
google.com.gt
google.gy
google.com.hk
google.hn
google.hr
google.ht
google.hu
google.co.id
google.iq
google.ie
google.co.il
google.im
google.co.in
google.io
google.is
google.it
google.je
google.com.jm
google.jo
google.co.jp
google.co.ke
google.ki
google.kg
google.co.kr
google.com.kw
google.kz
google.la
google.com.lb
google.com.lc
google.li
google.lk
google.co.ls
google.lt
google.lu
google.lv
google.com.ly
google.co.ma
google.md
google.me
google.mg
google.mk
google.ml
google.com.mm
google.mn
google.ms
google.com.mt
google.mu
google.mv
google.mw
google.com.mx
google.com.my
google.co.mz
google.com.na
google.ne
google.com.nf
google.com.ng
google.com.ni
google.nl
google.no
google.com.np
google.nr
google.nu
google.co.nz
google.com.om
google.com.pk
google.com.pa
google.com.pe
google.com.ph
google.pl
google.com.pg
google.pn
google.co.pn
google.com.pr
google.ps
google.pt
google.com.py
google.com.qa
google.ro
google.rs
google.ru
google.rw
google.com.sa
google.com.sb
google.sc
google.se
google.com.sg
google.sh
google.si
google.sk
google.com.sl
google.sn
google.sm
google.so
google.st
google.sr
google.com.sv
google.td
google.tg
google.co.th
google.com.tj
google.tk
google.tl
google.tm
google.to
google.tn
google.com.tr
google.tt
google.com.tw
google.co.tz
google.com.ua
google.co.ug
google.co.uk
google.com.uy
google.co.uz
google.com.vc
google.co.ve
google.vg
google.co.vi
google.com.vn
google.vu
google.ws
google.co.za
google.co.zm
google.co.zw
admob.com
adsense.com
adwords.com
android.com
blogger.com
blogspot.com
chromium.org
chrome.com
chromebook.com
cobrasearch.com
googlemember.com
googlemembers.com
com.google
feedburner.com
doubleclick.com
igoogle.com
foofle.com
froogle.com
googleanalytics.com
google-analytics.com
googlecode.com
googlesource.com
googledrive.com
googlearth.com
googleearth.com
googlemaps.com
googlepagecreator.com
googlescholar.com
gmail.com
googlemail.com
keyhole.com
madewithcode.com
panoramio.com
picasa.com
sketchup.com
urchin.com
waze.com
youtube.com
youtu.be
yt.be
ytimg.com
youtubeeducation.com
youtube-nocookie.com
like.com
google.org
google.net
466453.com
gooogle.com
gogle.com
ggoogle.com
gogole.com
goolge.com
googel.com
duck.com
googlee.com
googil.com
googlr.com
googl.com
gmodules.com
googleadservices.com
googleapps.com
googleapis.com
goo.gl
googlebot.com
googlecommerce.com
googlesyndication.com
g.co
whatbrowser.org
localhost.com
withgoogle.com
ggpht.com
youtubegaming.com
However, if you want to be sure if that's really all domains, you should ask Google directly.
Unfortunately there is no clean workaround; CSP only accepts wildcards (*) on the leftmost label of a domain.
You can disable this feature in GTM or Universal Analytics, but if you use Google Ads it needs this to calculate the segments for targeting your ads; otherwise your ads will be very expensive (and not targeted).
So: you can check all valid Google domains here https://www.google.com/supported_domains
and add them to the whitelist in img-src and connect-src in your CSP policy, and cross your fingers that Google will not add more (you could monitor this URL for changes with any of the services that do this).
This nightmare ends in mid-2023, when they will deprecate Universal Analytics; GA4 does not use this.
Not sure if you are using anything to report CSP failures; we discovered the service https://report-uri.com/ - the free tier gives a reasonable endpoint to report failures to. Once we went live we burned through our quota in 2 days though... but it did help to find holes in our CSP.
One warning: it did crash our server; we had to increase the maximum HTTP header size once we had included all the Google domains twice (in img-src and in connect-src).
I have the same exact problem as the user in this question. I am using this code:
function imgData(img) {
    var src_canvas = document.createElement('canvas');
    // Size the scratch canvas from the img argument ('this' is not the image here).
    src_canvas.width = img.width;
    src_canvas.height = img.height;
    var src_ctx = src_canvas.getContext('2d');
    src_ctx.drawImage(img, 0, 0);
    var src_data = src_ctx.getImageData(0, 0, img.width, img.height).data;
    return src_data;
}
And it always generates this error on Chrome:
Uncaught SecurityError: An attempt was made to break through the security policy of the user agent. Resources.js:27
Unable to get image data from canvas because the canvas has been tainted by cross-origin data. Resources.js:27
I am extremely new to JavaScript and have no idea what the issue is here. The internet seems to be at a loss today as well, so I am now here. The answers to the question I referenced say the issue was that he was trying to use images from outside sources (outside of his file system). The problem is, I'm not. I am just running the files in Chrome on my file system. The image is also in my file system, and not anywhere I would think would merit a security error. Any thoughts? Am I being a moron?
Are you accessing your HTML / JavaScript as files directly from your filesystem, instead of behind a webserver? If so, try the latter: most browsers will raise cross-origin security errors if you ask them to read data outside of a webserver (even if the files are on your own filesystem).
I have seen a number of questions that don't answer this: is it possible to check someone's bandwidth using JavaScript and load specific content based on it?
The BBC seem to give me low-quality images when I'm using my mobile in the middle of nowhere.
By the looks of it, this cool service does this, and it's a CDN, so it could be server-side:
http://www.resrc.it/docs/
Does anyone know how they do it? Or how I could do it using ASP.NET or JavaScript, or a community open-source plugin?
I think it may be possible with https://github.com/yahoo/boomerang/ but I'm not sure that this is its true purpose.
Basically you do it like this:
Start a timer
Load a fixed-size file, e.g. an image, through an AJAX call
Stop the timer
Take some samples and compute the average bandwidth
Something like this could work:
// http://upload.wikimedia.org/wikipedia/commons/5/51/Google.png
// Size = 238 KB
function measureBW(cnt, cb) {
    var start = new Date().getTime();
    var bandwidth;
    var i = 0;
    (function rec() {
        var xmlHttp = new XMLHttpRequest();
        xmlHttp.open('GET', 'http://upload.wikimedia.org/wikipedia/commons/5/51/Google.png', true);
        xmlHttp.onreadystatechange = function () {
            if (xmlHttp.readyState == 4) {
                // Elapsed time for this download, in milliseconds.
                var x = new Date().getTime() - start;
                // 238 KB divided by seconds elapsed = KB/s for this sample.
                var bw = Number(238 / (x / 1000));
                // Running average over the samples taken so far.
                bandwidth = ((bandwidth || bw) + bw) / 2;
                i++;
                if (i < cnt) {
                    start = new Date().getTime();
                    rec();
                } else {
                    cb(bandwidth.toFixed(0));
                }
            }
        };
        xmlHttp.send(null);
    })();
}

measureBW(10, function (e) {
    console.log(e);
});
Note that var xmlHttp = new XMLHttpRequest(); won't work in all browsers; you should check the user agent and use the right object.
And of course it's just an estimated value.
Here's a JSBin example
Start a timer.
Send an AJAX request to your server, requesting a file of known size.
When the AJAX request's done loading, stop the timer, and calculate the bandwidth from the passed time and file size.
The problem with JavaScript is that users can disable it. (Which is more common on phones, the very devices that are better off with smaller images.)
I've knocked this up based on timing image downloads (ref: http://www.ehow.com/how_5804819_detect-connection-speed-javascript.html)
Word of warning though: it says my speed is 1.81 Mbps, but according to SpeedTest.Net my actual speeds are rather different (speed-test screenshot not reproduced here).
The logic of timing the download seems right, but I'm not sure if it's accurate.
Well, like I said in my comments, you can choose between 2 approaches:
1) If you are in the context of a mobile app, you can query the technology used by the device directly, so you can tell the server up front what type (and size) of content you are able to render. I think PhoneGap can help you with accessing some of the native mobile APIs using JavaScript.
2) The server-timing thing. You can "serve" some files yourself: let's say you have a magic file in your landing page, and as soon as the client requests that file, you grab the HTTP request with a custom handler. You "manually" serve the file by writing to the output stream, and you measure the bytes sent and the time it took to reach EOF; then you can derive an estimate of the bandwidth. Combine this with the session cookie and you have this information per connected browser. (A rough sketch follows below.)
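Here is that rough sketch in Node.js (an ASP.NET handler would be analogous; the port and payload size are arbitrary, and the flush callback only tells you the data left the server, so treat the number as an approximation):

var http = require('http');
var payload = Buffer.alloc(256 * 1024); // a 256 KB "magic file" of zeroes

http.createServer(function (req, res) {
    var start = Date.now();
    res.writeHead(200, {
        'Content-Type': 'application/octet-stream',
        'Cache-Control': 'no-store' // a cached response measures nothing
    });
    res.end(payload, function () {
        // Very rough per-connection estimate: bytes sent over elapsed time.
        var seconds = (Date.now() - start) / 1000;
        console.log('~' + (payload.length / 1024 / seconds).toFixed(0) + ' KB/s');
    });
}).listen(8080);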
While this isn't an answer, it may be important to note that measuring bandwidth isn't always reliable.
http://www.smashingmagazine.com/2013/01/09/bandwidth-media-queries-we-dont-need-em/
To paraphrase the above:
...the number of bits downloaded divided by the time it took to download them...is true when you download a large file over a single warmed-up TCP connection. That is rarely the case.
Typical page load scenario:
Initial HTML page is downloaded using slow-start mechanism, so measurement will significantly underestimate the available bandwidth
CSS and JavaScript external resources are loaded -- a collection of new TCP connections, all in their slow-start phase, and they are not all necessarily to the same destination server
Images are loaded -- multiple connections, each one downloading a resource. The problem is that these connections are not always in the same phase of their life cycle. Some might be in the slow-start phase; some may have suffered a packet loss and, thus, reduced their window and the bandwidth they are trying to fill; and some might be warmed-up TCP connections, ready to fill the bandwidth. These TCP connections are not necessarily all to the same destination server, and the bandwidth towards the various destination servers might be different between one another.
So, estimating bandwidth is possible, but it is far from simple, and it is possible only for certain phases of the page-loading process. And because having several TCP connections to various destination servers is common (for example, a CDN could host the image resources of a Web page), we cannot really tell what is the bandwidth we want to measure.
Since this is an older question, the alternative suggestion at the end of the article is to consider the more recent srcset attribute for responsive imagery, which lets the browser decide which asset to load based on whatever it knows (which should be more than us). It sounds like it's weighted more towards just determining resolution, but maybe it'll get smarter as support goes up.
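For example, here is a sketch of handing that decision to the browser (the image URLs are hypothetical; srcset lists the available widths and the browser picks one based on viewport, device resolution and, in principle, whatever else it knows):

// Let the browser choose among several pre-generated sizes.
var responsiveImg = document.createElement('img');
responsiveImg.srcset = 'photo-480.jpg 480w, photo-960.jpg 960w, photo-1920.jpg 1920w';
responsiveImg.sizes = '(max-width: 600px) 480px, 100vw';
responsiveImg.src = 'photo-960.jpg'; // fallback for browsers without srcset
document.body.appendChild(responsiveImg);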
I have released BwCh, which is an open-source JavaScript API to detect bandwidth for web-based environments.
It is built with ES2015. It uses some of the latest JavaScript innovations (window.navigator.connection, supported in Chrome 48+ for Android as of April 2016) in order to provide a flexible method to detect bandwidth for both mobile and desktop devices. It falls back to (or complements) image pre-loading to detect bandwidth where those newer APIs are not available.
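For reference, a sketch of the newer API this builds on (support varies by browser, and anything it reports is a hint rather than a guarantee):

// Network Information API - mainly Chrome-family browsers.
var conn = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
if (conn) {
    console.log('Effective connection type:', conn.effectiveType); // e.g. '4g'
    console.log('Estimated downlink (Mbit/s):', conn.downlink);
} else {
    // Fall back to image-timing measurement, as in the answers above.
}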
I am using a mobile-network-based internet connection, and the source code is being rewritten when they present the site to the end user.
On localhost my website looks fine, but when I browse the site from the remote server via the mobile network connection, the site looks bad.
Checking the source code, I found that a piece of JavaScript code is being injected into my pages which disables some CSS, making the site look bad.
I don't want their image compression or bandwidth compression at the expense of my well-designed CSS.
How can I prevent or stop the mobile network provider (Vodafone in this case) from proxy-injecting their JavaScript into my source code?
You can use this on your pages. It still compresses and puts everything inline, but it won't break scripts like jQuery, because it will escape everything based on W3C standards:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
On your server you can set the cache control:
"Cache-Control: no-transform"
This will stop ALL modifications and present your site as it is!
Reference docs here
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5
http://stuartroebuck.blogspot.com/2010/08/official-way-to-bypassing-data.html
Related: Web site exhibits JavaScript error on iPad / iPhone under 3G but not under WiFi
You're certainly not the first. Unfortunately many wireless ISPs have been using this crass and unwelcome approach to compression. It comes from Bytemobile.
What it does is have a proxy recompress, by default, all images you fetch to a smaller size (making image quality significantly worse). Then it crudely injects a script into your document that adds an option to load the proper image for each recompressed image. Unfortunately, since the script is horribly-written 1990s-style JS, it craps all over your namespace, hijacks your event handlers and stands a high chance of messing up your own scripts.
I don't know of a way to stop the injection itself, short of using HTTPS. But what you could do is detect or sabotage the script. For example, if you add a script near the end of the document (between the 1.2.3.4 script inclusion and the inline script trigger) to neuter the onload hook it uses:
<script type="text/javascript">
bmi_SafeAddOnload= function() {};
</script>
then the script wouldn't run, so your events and DOM would be left alone. On the other hand the initial script would still have littered your namespace with junk, and any markup problems it causes will still be there. Also, the user will be stuck with the recompressed images, unable to get the originals.
You could try just letting the user know:
<script type="text/javascript">
if ('bmi_SafeAddOnload' in window) {
var el= document.createElement('div');
el.style.border= 'dashed red 2px';
el.appendChild(document.createTextNode(
'Warning. Your wireless ISP is using an image recompression system '+
'that will make pictures look worse and which may stop this site '+
'from working. There may be a way for you to disable this feature. '+
'Please see your internet provider account settings, or try '+
'using the HTTPS version of this site.'
));
document.body.insertBefore(el, document.body.firstChild);
}
</script>
I'm surprised no one has put this as an answer yet. The real solution is:
USE HTTPS!
This is the only way to stop ISPs (or anyone else) from inspecting all your traffic, snooping on your visitors, and modifying your website in flight.
With the advent of Let's Encrypt, getting a certificate is now free and easy. There's really no reason not to use HTTPS in this day and age.
You should also use a combination of redirects and HSTS to keep all of your users on HTTPS.
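A minimal Node.js sketch of that combination (certificate setup omitted; the max-age value is just an example):

var http = require('http');
// Plain-HTTP listener whose only job is to bounce everyone to HTTPS.
http.createServer(function (req, res) {
    res.writeHead(301, { 'Location': 'https://' + req.headers.host + req.url });
    res.end();
}).listen(80);
// On the HTTPS server, send HSTS with every response so browsers
// remember to use HTTPS directly next time:
// res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');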
Your provider might have enabled a Bytemobile Unison feature called "clientless personalization". Try accessing the fixed URL http://1.2.3.50/ups/ - if it's configured, you will end up on a page which will offer to let you disable all the features you don't like, including JavaScript injection.
Good luck!
Alex.
If you're writing your own websites, adding a header worked for me:
PHP:
Header("Cache-Control: no-transform");
C#:
Response.Cache.SetNoTransforms();
VB.Net:
Response.Cache.SetNoTransforms()
Be sure to use it before any data has been sent to the browser.
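Node.js (a hypothetical addition following the same pattern; res is the http.ServerResponse):
res.setHeader("Cache-Control", "no-transform");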
I found a trick. Just add:
<!--<![-->
After:
<html>
More information (in German):
http://www.programmierer-forum.de/bmi-speedmanager-und-co-deaktivieren-als-webmaster-t292182.htm#3889392
The BMI JS isn't only on Vodafone. Virgin Media UK and T-Mobile UK also give you this extra feature, enabled by default and for free. ;-)
On T-Mobile it's called "Mobile Broadband Accelerator"
You can Visit:
http://accelerator.t-mobile.co.uk
or
http://1.2.3.50/
to configure it.
In case the above doesn't apply to you, or for some reason it's not an option, you could potentially set up your own local proxy (Polipo, with or without Tor).
There is also a Firefox addon called "blocksite", or, as a more drastic approach, you could reset TCP connections to 1.2.3.0/24:80 on your firewall.
But unfortunately that wouldn't fix the damage.
Funnily enough, T-Mobile and Virgin Media mobile/broadband support are not aware of this feature! (2011.10.11)
PHP: Header("Cache-Control: no-transform"); Thanks!
I'm glad I found this page.
That injector script was messing up my PHP page source code, making me think I had made an error in my PHP coding when viewing the page source. Even though the script was blocked with the Firefox NoScript add-on, it was still messing up my code.
Well, after that irritating dilemma, I wanted to get rid of it completely, and not just block it with the Adblock or NoScript Firefox add-ons or just on my own PHP pages.
To STOP http://1.2.3.4 completely in Firefox, get the add-on: Modify Headers.
Go to the Modify Headers add-on options, now on the Headers tab.
Select Action: choose ADD.
For Header Name type in: cache-control
For Header Value type in: no-transform
For Comment type in: Block 1.2.3.4
Click Add... then click Start.
The 1.2.3.4 script will not be injected into any more pages! Yeah!
I no longer see 1.2.3.4 being blocked by NoScript, because it's not there any more.
But I will still add PHP: Header("Cache-Control: no-transform"); to my PHP pages.
If you are getting it on a site that you own or are developing, then you can simply override the function by setting it to null. This is what worked for me just fine.
bmi_SafeAddOnload = null;
As for getting it on other sites you visit, you could probably open the devtools console and just enter that in there to wipe it out if a page is taking a long time to load. Haven't tested that yet, though.
OK, nothing was working for me. So I replace the image URLs every second, because whenever my DOM updates the problem is there again. The other solution is to only use background-image styles included in the pages. Neither is clean.
// Re-check the DOM every second, because the injected URLs reappear
// whenever the DOM updates.
setInterval(function () { imageUpdate(); }, 1000);

function imageUpdate() {
    console.log('######imageUpdate');
    var image = document.querySelectorAll("img");
    for (var num = 0; num < image.length; num++) {
        if (stringBeginWith(image[num].src, "http://1.1.1.1/bmi/***yourfoldershere***")) {
            var str = image[num].src;
            // Strip the injected proxy prefix to restore the original URL.
            var res = str.replace("http://1.1.1.1/bmi/***yourfoldershere***", "");
            image[num].src = res;
            console.log("replace " + str + " by " + res);
            /*
            Another solution is to put the img src into data-src, and after
            the DOM has loaded copy every data-src back into its img src:
            var dataStr = image[num].getAttribute('data-src');
            image[num].src = dataStr;
            */
        }
    }
}

function stringEndsWith(string, suffix) {
    return string.indexOf(suffix, string.length - suffix.length) !== -1;
}

function stringBeginWith(string, prefix) {
    // True when the string starts with the given prefix.
    return string.indexOf(prefix) === 0;
}
An effective solution that I found was to edit your hosts file (/etc/hosts on Unix/Linux-type systems, C:\Windows\System32\drivers\etc\hosts on Windows) to have:
null 1.2.3.4
which effectively maps all requests to 1.2.3.4 to null. Tested with my Crazy John's (owned by Vodafone) mobile broadband. If your provider uses a different IP address for the injected script, just change it to that IP.
Header("Cache-Control: no-transform");
use the above php code in your each php file and you will get rid of 1.2.3.4 code injection.
That's all.
I too was suffering from same problem, now it is rectified. Give a try.
I added to /etc/hosts
1.2.3.4 localhost
Seems to have fixed it.