This question has been asked a lot, but I just don't understand why this is happening to me.
Basically, I have a canvas, and an image, and when I try to do this:
var canvas = document.getElementById('somecanvas');
var ctx = canvas.getContext('2d');
var someimage = document.createElement('img');
someimage.setAttribute('src', 'img/someimage.png');
someimage.onload = function(){
    ctx.drawImage(someimage, 0, 0, canvas.width, canvas.height);
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
};
I get the unsightly:
"Uncaught DOMException: Failed to execute 'getImageData' on 'CanvasRenderingContext2D': The canvas has been tainted by cross-origin data.
at HTMLImageElement.someimage.onload"
I should mention that I'm fairly new to programming, and even more so to JavaScript. Should this be happening when I'm running it from file://?
I haven't found anyone having the exact same problem as me, and the explanations people got for the other questions had to do with the server the images were hosted on. But in this case it isn't hosted on a server, so I'm confused as to how it all works. Or rather, doesn't work.
For security reasons, many browsers will complain if you try to do certain things (canvas image drawing among them), if you use a file:// URL.
You really should serve both the page and the images from a local HTTP server in order to avoid those restrictions.
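If you have Node installed, a few lines are enough for a throwaway static server; this is just a minimal sketch (the port and file-type list are arbitrary choices, and there is no security hardening), not something to deploy:
// serve.js - a bare-bones static file server for local testing only
var http = require('http');
var fs = require('fs');
var path = require('path');

var types = { '.html': 'text/html', '.js': 'text/javascript', '.css': 'text/css', '.png': 'image/png', '.jpg': 'image/jpeg' };

http.createServer(function (req, res) {
    // Map the requested URL onto a file in the current directory.
    var filePath = path.join(process.cwd(), req.url === '/' ? 'index.html' : req.url.split('?')[0]);
    fs.readFile(filePath, function (err, data) {
        if (err) {
            res.writeHead(404);
            res.end('Not found');
            return;
        }
        res.writeHead(200, { 'Content-Type': types[path.extname(filePath)] || 'application/octet-stream' });
        res.end(data);
    });
}).listen(8080, function () {
    console.log('Serving ' + process.cwd() + ' on http://localhost:8080');
});
Alternatively, running python3 -m http.server or npx http-server from the page's directory does the same job with no code at all.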
Ah, you've hit the CORS restriction, and I'm guessing here that you're encountering this in Google Chrome, which is notorious for being the most aggressive implementor of this. I've seen this a LOT.
CORS is a browser security mechanism that controls how cross-origin content can be used by a web page. It not only affects script files (as you might expect, because you don't want anyone to be able to inject malicious scripts into your web page), but also resources such as images and fonts.
The reason it affects images is that malicious individuals discovered they could use the HTML5 canvas object to copy the contents of your web page to a PNG file, and hoover up personal data from it at will. You can imagine what would happen if that occurred while you were in the middle of an Internet banking transaction!
But, and this is the annoying part you're encountering, stopping such malicious activity also impinges on legitimate uses of cross-origin resources (e.g, keeping all your images in a separate repository).
So how do you get around this?
On Firefox, you shouldn't have a problem. Firefox applies some intelligence to the matter, and recognises that images coming from the same file:// specification as your web page, are not actually cross-origin. It lets these through, so long as they're in the same directory on your hard drive as your web page.
Chrome, on the other hand, is much less permissive. It treats all such accesses as cross-origin, and implements security shutdowns the moment you try using getImageData() and putImageData() on a canvas.
There is a workaround, if you don't want to go to the trouble of installing and configuring your own local web server, but still want to use Chrome and its nice, friendly debugger. You have to create a shortcut that points to your Chrome executable, but starts it with a special command-line flag:
--allow-file-access-from-files
Save this shortcut, labelling it something like "Chrome Debug Version" to remind you ONLY to use this for debugging your own code (you should never access the Internet proper with weakened security!), and you should be able to debug your code without issues from this point on.
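On Windows, the shortcut target then looks something like this (the path shown is a typical default install location, so adjust it to wherever Chrome lives on your machine):
"C:\Program Files\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files
Make sure every existing Chrome window is closed before launching it this way; command-line flags are ignored if an instance of Chrome is already running.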
But, if you're going to be doing a lot of debugging of this sort, the better long-term solution is to install a web server and configure it to serve up code from your desired directories whenever you use a "localhost" URL. This is, I know, tedious and time-consuming, and distracts from your desire to get coding, but once it's done, it's done and dusted, and solves your woes permanently.
If you really want to put your programming skills to the test, you could even write your own web server to do the job, using something like the node.js server side framework, but if you're new to JavaScript, that's a task you're better leaving until you have a lot more experience! But once your skills reach that point, doing that is a nice educational exercise that will also solve some of your other woes, once you've worked out how a web server works.
If you run with an established web server, you then, of course, have the fun of deciding which one involves the least headaches. Apache is powerful, but big. Hiawatha is both lightweight and secure, and would be my first choice if it wasn't for the fact that a 64-bit version is still not available (sigh), because the 32-bit version that ran on my old XP box was a joy to use. Nginx I know little about, but some people like it. Caveat emptor and all that.
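One last note: once you are serving over HTTP, images that genuinely live on another origin (the separate-repository case mentioned earlier) can still be used with getImageData(), provided the remote server sends the appropriate CORS headers (Access-Control-Allow-Origin) and you request the image with the crossOrigin attribute. A sketch, with a made-up image URL:
var canvas = document.getElementById('somecanvas');
var ctx = canvas.getContext('2d');
var img = new Image();
img.crossOrigin = 'anonymous'; // request the image with CORS; only works if the server permits it
img.onload = function () {
    ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height); // no longer taints the canvas
    console.log(data.width, data.height);
};
img.src = 'https://images.example.com/someimage.png'; // hypothetical CORS-enabled host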
Related
The website testmysite claims that the website I'm working on is very slow: 9.8 seconds on 4G, which seems ridiculous.
My webapp requests geolocation first, but denies it immediately, so this is not where the slowness comes from.
The site is server-rendered, and sends scripts to the client to optimise.
The bundle analyzer output is very dubious: it claims that my total bundle size is 750 kB, but I strongly doubt that this is the case.
Even if it turns out that testmysite is not a reliable source, I'd like to know why it says that my website is so slow, and what I can actually do to improve it.
My vendor node modules are chunk split, because I want the browser to cache them individually.
My website is deployed here: https://zwoop-website-v001.herokuapp.com
Note that loading may take some time because I use the free service and the server falls asleep often.
Edit: my performance tab shows the following in Chrome:
I have really zero idea why that website says that my site is slow...
But if the measurement is used by search engines, then I do care.
Answer:
I just voted to re-open this question as #slebetman has helped me to troubleshoot some issues and I'd just like to formulate an answer.
First thing to note is that the free Heroku server that I run in production is located in Europe (this was an option you could choose), and it is unclear where the server from testmysite is located. #slebetman was located in East-Asia when running his test.
#slebetman mentioned that the network tab indicated for him a very slow load time for loading fonts (woff2), of about 2 seconds. This didn't occur for me, but as it turns out, these are font-awesome icons that are loaded from a CDN.
So while there is the logical thought of looking at performance in terms of script loading, interpretation and rendering, there may additionally be a latency issue related to third-party resources that are downloaded from another server. Even when your website is fast, you don't actually know if you have imported some module that requests additional resources.
Either of these things can be tracked through the Performance or Network tab of Google Chrome. An additional tip is to mimic a slow network (although I don't think this would actually surface all the problems that may occur on the network in terms of redirects, DNS resolution etc., as #slebetman mentions could be the case as well).
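If you would rather measure this in code than eyeball the Network tab, the Resource Timing API can list every resource the page loaded along with how long it took; a rough sketch (the 500 ms threshold is just an arbitrary cut-off):
// Log every resource that took longer than 500 ms, with its host,
// to spot slow third-party origins (fonts, CDNs, analytics scripts, ...).
performance.getEntriesByType('resource')
    .filter(function (entry) { return entry.duration > 500; })
    .forEach(function (entry) {
        console.log(new URL(entry.name).host, Math.round(entry.duration) + ' ms', entry.initiatorType);
    });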
First some backstory:
We have a website that includes a Google Map in the usual way:
<script src="https://maps.googleapis.com/maps/api/js?v=...."></script>
Then there is some of our JavaScript code that initializes the map. Suddenly, yesterday, pages started to load but then froze up entirely. In Chrome this meant having to force-quit and restart the browser; Firefox was smarter and allowed the user to stop script execution.
Now after some debugging, I found out that a previous developer had included the experimental version of the Google Maps API:
https://maps.googleapis.com/maps/api/js?v=3.exp
So it's likely that something has changed on Google's servers (which is completely understandable). This uncovered a bug on our end, and the two in combination caused the script to hang and freeze up the website and the browser.
Now ok, bug is found and fixed, no big harm done.
And now the actual question:
Is it possible to somehow sandbox these external script references so that they cannot crash my main site? I am loading a decent number of external JavaScript files (tracking, analytics, maps, social) from their own servers.
However, such code can change at any time and could have bugs that freeze my site. How can I protect my site? Is there a way to define a maximum allowable execution time?
I'm open to all kinds of suggestions.
It actually doesn't matter where the scripts are coming from - whether an external source or your own server. Either way, they run in the client's browser, and that makes it quite difficult to achieve your desired sandbox behavior.
You can get a sandbox inside your DOM by using iframes with the sandbox attribute. This way the content of the iframe is independent from the DOM of your actual website, and you can include scripts independently as well. But this is beneficial mainly for security; I am not sure how it would fare for overall stability when one script has a bug like an endless loop or similar. Still, imho this is worth a try.
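A rough sketch of what that could look like, with a placeholder URL standing in for a page that wraps the external widget:
// Create a sandboxed iframe and load the third-party widget inside it.
// 'allow-scripts' lets the widget run its own JS, but omitting
// 'allow-same-origin' keeps it in a separate context with no access
// to your DOM, cookies or storage.
var frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts');
frame.src = 'https://example.com/widget.html'; // hypothetical wrapper page that includes the external script
frame.style.border = '0';
document.body.appendChild(frame);
The catch is that the external script has to be included from the iframe's own page, rather than directly from yours, for this isolation to apply.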
For further explanation see: https://www.html5rocks.com/en/tutorials/security/sandboxed-iframes/
In my web page, I have to start a desktop application on the client's computer if it's installed. Any idea how I can do this?
If the application is MS Office or Adobe Reader, I know how to start them, but the application I want to start is a custom application. You cannot find it on the internet.
How can I open the application?
Basically it's not possible unless the application registers a protocol handler that will trigger it. If it does, all you need to do is provide a link using that protocol:
yourcustomapp://some.parameters
Another way a third-party app can integrate with the browser is by hooking into it as a plugin. This is how Flash apps work, for example.
If the app you are trying to launch does not support something like that, it's going to be close to impossible to achieve what you want.
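Triggering it from the page is then just a matter of navigating to such a URL, either via a plain link or from script; a tiny sketch reusing the made-up protocol above (the button id is hypothetical):
// Hand the URL off to whatever application registered the
// yourcustomapp:// protocol; most browsers will prompt the user first.
document.getElementById('launch-button').addEventListener('click', function () {
    window.location.href = 'yourcustomapp://some.parameters';
});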
The browser sandbox prohibits you from executing local resources, for good reason - to thwart a website destroying your box with malicious code. I've been researching the same functionality.
The only solution I've found is to build an extension in Mozilla Firefox which can launch your app. Extensions live outside the sandbox so they can execute local resources. See this page for how to do that. You may be able to do it cross-browser using crossrider, though I haven't had success with that yet.
You could alternatively build a thick client populated from a web service, and launched from the browser through an extension as mentioned above. This is what I'm doing to get around the sandbox. I'm using local XUL for this.
See my question for additional discussion.
First off: you can't do this from JavaScript in any sort of portable way.
If the application is ms office or adobe reader, I know how to startup them
No you don't - you know how to send a document that the browser associates with those applications; the browser then invokes them, supplying the name of the local copy of the response. You can't just start the programs.
You just need to do the same for your app - invent a new mime type (the major type would be 'application', and by convention non-standard minor types are prefixed with 'x-', so you might use application/x-hguser), then associate that mime type with the relevant program on the browser side.
i.e. you need to explicitly configure each browser.
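On the server side, that just means sending the file with your invented Content-Type so the browser's file-type association kicks in; a sketch using Node/Express (the route, file path and extension are made up for illustration):
// Hypothetical Express route that serves a document under the invented
// mime type, so a browser configured to associate application/x-hguser
// with your program will hand the downloaded file straight to it.
var express = require('express');
var fs = require('fs');
var app = express();

app.get('/documents/latest', function (req, res) {
    res.set('Content-Type', 'application/x-hguser');
    res.set('Content-Disposition', 'attachment; filename="latest.hguser"');
    fs.createReadStream('/data/docs/latest.hguser').pipe(res);
});

app.listen(3000);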
I have already encountered that problem in some complex production environments.
I did the trick using the following code:
function launch(p_app_path)
{
    // Requires Internet Explorer with ActiveX scripting enabled (see the security setting below).
    var oShell = new ActiveXObject("WScript.Shell");
    // Quote the path so it survives spaces; the second argument (1) shows the window normally.
    oShell.Run('"' + p_app_path + '"', 1);
}
In IE options > Security > Custom level > ActiveX controls and plug-ins, set "Initialize and script ActiveX controls not marked as safe for scripting" to Prompt or Enable.
It isn't a security problem when your website is confined to a specific, trusted security context.
And, as the saying goes, there's no point building a gas plant (an over-engineered contraption) for this.
JavaScript alone can't do this. (No, not even with MS Office or Adobe Reader.) Thankfully.
There are a number of old ways, including using ActiveX, which may work for your needs. As others have pointed out while typing this, you can customize responses based on the mime type or the protocol, etc.
Any way you look at it, you're going to need control over the end users' browser. If you're in a closed environment where you can dictate policy (users must use a specific browser, with a specific configuration), then that's what you'll need to do. For an open environment with no control over the end users, you're out of luck.
I'm actually having a lot of success right now with SiteFusion. It's a PHP client/server application framework that serves out XUL/JavaScript applications from a server daemon running in Apache. You access applications from a very thin client in XULRunner, or potentially off a web page using extensions. Clients can execute on any platform, and they're outside of the browser sandbox so you can access local resources such as executables. It's a fairly elegant solution, their website provides great examples and documentation, and their forum is very responsive. I actually found a minor bug in passing arguments to local executables, posted a question on the forum, and it was fixed by the chief developer in under 15 minutes. Very impressive, overall!
I would like to know how browsers allow viruses to pass through to our computers. The response we receive is a text response; the only executable thing in it is JavaScript, which does not have many privileges. What makes browsers allow certain files to be passed through to the computer?
The short list:
Browser plugins. ActiveX* in general and Flash in particular are notorious for having holes.
Buffer overflows. Crafting either HTML pages or JavaScript in a specific way can lead to being able to write anything you want into memory... which can then lead to remote code execution.
Other errors. I recall bugs in the past where the browser could be tricked into downloading files into a known location, then execute them.
*Google is working on expanding this particular kind of hole to other browsers with Native Client.
Things like ActiveX controls allow native code to be executed on local machines with essentially full privileges. Most viruses propagate through known security holes in unpatched browsers and don't use Javascript directly.
Browser bugs and misconfiguration can allow sites that should be confined to the locked-down "Internet" security zone to execute code as if they were trusted. They can then use ActiveX components to install malware.
Exploiting software bugs - commonly when rendering images, interpreting HTML/CSS/JavaScript, or loading ActiveX components or Flash files.
Once a bug is exploited, the usual procedure is to inject "shellcode" (a chunk of natively compiled code) into the process's memory and get it executed.
Recently I have been having issues with Firefox 3 on Ubuntu Hardy Heron.
I will click on a link and it will hang for a while. I don't know if it's a bug in Firefox 3 or a page running too much client-side JavaScript, but I would like to try and debug it a bit.
So, my question is "is there a way to have some kind of process explorer, or task manager sort of thing for Firefox 3?"
I would like to be able to see what tabs are using what percent of my processor via the JavaScript on that page (or anything in the page that is causing CPU/memory usage).
Does anybody know of a plugin that does this, or something similar? Has anyone else done this kind of inspection another way?
I know about Firebug, but I can't imagine how I would use it to pinpoint which tab is using a lot of resources.
Any suggestions or insights?
It's probably the awesome firefox3 fsync "bug", which is a giant pile of fail.
In summary:
Firefox3 saves its bookmarks and history in an SQLite database
Every time you load a page it writes to this database several times
SQLite cares deeply that you don't lose your bookmarks, so each time it writes, it instructs the kernel to flush its database file to disk and ensure that it's fully written
Many variants of Linux, when told to flush like that, flush EVERY file. This may take a minute or more if you have background tasks doing any kind of disk-intensive work.
The kernel makes firefox wait while this flush happens, which locks up the UI.
So, my question is, is there a way to have some kind of process explorer, or task manager sort of thing for Firefox 3?
Because of the way Firefox is built this is not possible at the moment. But the new Internet Explorer 8 Beta 2 and the just announced Google Chrome browser are heading in that direction, so I suppose Firefox will be heading there too.
Here is a post (Google Chrome Process Manager) by John Resig, of Mozilla and jQuery fame, on the subject.
There's a thorough discussion of this that explains all of the fsync-related problems that affected pre-3.0 versions of FF. In general, I have not seen the behaviour since then either, and really it shouldn't be a problem at all if your system isn't also doing IO-intensive tasks. Firebug/Venkman make for nice debuggers, but they would be painful for figuring out these kinds of problems in someone else's code, IMO.
I also wish that there was an easy way to look at CPU utilization in Firefox by tab, though, as I often find myself with FF eating 100% CPU, but no clue which part is causing the problem.
XUL Profiler is an awesome extension that can point out extensions and client-side JS gone bananas CPU-wise. It does not work on a per-tab basis, but per-script (or so). You can normally relate those .js scripts to your tabs or extensions by hand.
It is also worth mentioning that Google Chrome has a really good built-in task manager that gives memory and CPU usage per tab, extension and plugin.
[XUL Profiler] is a JavaScript profiler. It shows elapsed time in each method as a graph, as well as browser canvas zone redraws, to help track down CPU-consuming chunks of code. It traces all JS calls and paint events in XUL and page contexts, and builds an animation showing dynamically which canvas zones are being redrawn.
As of FF 3.6.10 it is not up to date, in that it is no longer marked as compatible. But it still works, and you can override the incompatibility with the equally awesome MR Tech Toolkit extension.
There's no "process explorer" kind of tool for Firefox; but there's https://developer.mozilla.org/en-US/docs/Archive/Mozilla/Venkman with profiling mode, which you could use to see the time spent by chrome (meaning non-content, that is not web-page) scripts.
From what I've read about it, DTrace might also be useful for this sort of thing, but it requires creating a custom build and possibly adding additional probes to the source. I haven't played with it myself yet.