CreateObjectURL memory leak in Chrome - javascript

I'm developing a web application that lets anyone edit images directly in the browser. While building the site, I ran into a big problem with opening local files.
There are two well-known ways to do this.
First, using FileReader:
// render the image in our view
function renderImage(file) {
    // generate a new FileReader object
    var reader = new FileReader();
    // inject an image with the src url once the file is read
    reader.onload = function(event) {
        var the_url = event.target.result;
        $('#some_container_div').html("<img src='" + the_url + "' />");
    };
    // reading the file triggers the onload event above
    reader.readAsDataURL(file);
}
// handle input changes
$("#the-file-input").change(function() {
    console.log(this.files);
    // grab the first image in the FileList object and pass it to the function
    renderImage(this.files[0]);
});
Second, using createObjectURL and revokeObjectURL:
window.URL = window.URL || window.webkitURL;

var fileSelect = document.getElementById("fileSelect"),
    fileElem = document.getElementById("fileElem"),
    fileList = document.getElementById("fileList");

fileSelect.addEventListener("click", function (e) {
    if (fileElem) {
        fileElem.click();
    }
    e.preventDefault(); // prevent navigation to "#"
}, false);

function handleFiles(files) {
    if (!files.length) {
        fileList.innerHTML = "<p>No files selected!</p>";
    } else {
        fileList.innerHTML = "";
        var list = document.createElement("ul");
        fileList.appendChild(list);
        for (var i = 0; i < files.length; i++) {
            var li = document.createElement("li");
            list.appendChild(li);
            var img = document.createElement("img");
            img.src = window.URL.createObjectURL(files[i]);
            img.height = 60;
            img.onload = function() {
                window.URL.revokeObjectURL(this.src);
            };
            li.appendChild(img);
            var info = document.createElement("span");
            info.innerHTML = files[i].name + ": " + files[i].size + " bytes";
            li.appendChild(info);
        }
    }
}
In my case, neither approach works well in Chrome (IE is fine).
I could open the local files with both of them, but both always leaked memory, even though I called revokeObjectURL exactly as documented with the second approach.
I have already checked chrome://blob-internals/ and confirmed that all of the blobs are released properly. But Chrome still holds on to the physical memory, and it is never released unless I refresh the page. Eventually Chrome crashed once memory usage reached about 1.5 GB.
FileReader showed the same result even though I released the references, and on top of that its I/O performance was terrible.
http://ecobyte.com/tmp/chromecrash-1a.html (by logidelic)
Here is a test page. You can reproduce the problem by simply dropping files onto the green element; the page uses the createObjectURL/revokeObjectURL method.
While testing, you can watch physical memory consumption in Chrome's task manager (Shift + Esc) or your OS task manager.
Did I miss something, or is this a known bug?
Please help! If you know another way to resolve this, please tell me.

I have the same problem with the createObjectURL method. In the end, I found that the memory can be released by adding this line inside the onload function:
this.src = '';
However, the image will disappear from the page, as you might expect.
In addition, I've noticed that sometimes Chrome (50.0.2661.102) or Chrome Canary (52.0.2740.0) fails to release the memory even with this.src = ''. Once that happens, you need to restart Chrome; simply refreshing the page doesn't work.
I have tried the readAsDataURL method, too. Its memory is released reliably (even when Chrome fails to release memory for createObjectURL with this.src = ''). The drawback is that it is much slower, around 10x longer than createObjectURL.
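For clarity, here is that workaround in context: a minimal sketch applying it to the img.onload handler from the second approach above (combining revokeObjectURL with the src-clearing trick is my assumption about how the two fit together):
img.onload = function() {
    // release the blob URL once the image has decoded
    window.URL.revokeObjectURL(this.src);
    // then drop the reference to the decoded bitmap as well;
    // note the image disappears from the page once src is cleared
    this.src = '';
};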


Preload images asynchronously into the browser cache when the image has to be generated on the server first

I've been reading Stack Overflow for a couple of years now, but never posted, until today: I ran into an issue I couldn't solve by myself and couldn't find a solution for.
The scenario: I have a dynamic webpage that basically shows screenshots of websites. These screenshots are generated on the fly for every new user, and their URLs change. I want to preload these images into the browser cache so they're available instantly once the user clicks a link. I don't want to increase the subjective load time of the page, so they have to be loaded invisibly in the background.
My approach:
I used Jelastic as my infrastructure to be able to scale later, then installed CentOS with nginx, PHP, and PhantomJS. I use PHP to invoke PhantomJS to take the screenshots:
exec("phantomjs engine.js " . $source . " " . $filez . " > /dev/null &");
Redirecting to /dev/null (and backgrounding with &) keeps the exec call from blocking, so it doesn't add to the user's load time.
I output the links to the browser. So far it works. Now I want to preload these images:
for (var i = 0; i < document.links.length; i++) {
    imgArray[i] = new Image(1, 1);
    imgArray[i].visibility = 'hidden';
    imgArray[i].src = (document.links[i].href.substr(7) + ".png");
    document.links[i].href = 'javascript: showtouser("' + imgArray[i].src.substr(7) + '");';
}
Two things I probably did wrong here:
I start the image preloading before the images have been generated on the server. I haven't found a way to start the caching only once the image has been generated by PhantomJS; the onload event obviously doesn't help here.
I think my approach isn't really async, and it would increase the subjective loading time felt by the user.
What am I doing wrong? I'm an ISP guy, I suck at javascript :/
Your approach was async. What you needed was a mechanism to identify images that didn't load, and retry.
This script will preload images, retry on failure, and hide the links of images that don't exist even after all retries (demo):
var links = Array.prototype.slice.call(document.links, 0); // convert links to a normal array
links.forEach(prepareImageLink);

function prepareImageLink(link) {
    var retries = 3; // remaining retries for this image
    var src = link.href + ".png"; // add .substr(7) in your case
    var image = new Image();
    /** option: hide all links, then reveal those whose image preloaded
    link.style.visibility = 'hidden';
    image.onload = function() {
        link.style.visibility = '';
    };
    **/
    image.onerror = function() {
        if (retries === 0) {
            link.style.visibility = 'hidden'; // hide the link if the image doesn't exist
            //link.remove(); // option - remove the link if the image doesn't exist
            return;
        }
        retries--;
        setTimeout(function() {
            // append the current time as a cache-buster so the browser actually re-requests the image
            image.src = src + '?' + Date.now();
        }, 1000); // the amount of time to wait before retrying; tune it to fit your system
    };
    image.src = src;
    /** This replaces 'javascript: showtouser("' + imgArray[i].src.substr(7) + '");', which won't work in some browsers **/
    link.addEventListener('mouseover', function() {
        document.getElementById('image').src = image.src; // change to your showtouser func
    });
}

Pasting large files

I'm writing a plugin to handle file uploads, and I thought implementing a paste feature would be awesome (how often have you just wanted to paste instead of having to open a photo editor, save the image as a file, and then upload it? But I digress). What I have so far works, except when the pasted file gets too big. I can't tell you exactly what size 'too big' is, because I'm pasting a screenshot selection saved to the clipboard.
My current code looks like this:
document.getElementById('AJS').onpaste = function (e) {
    var items = (e.clipboardData || e.originalEvent.clipboardData).items,
        blob = items[0].getAsFile();
    if (blob && blob.type.match(T.s.accept) && T.currentlength < T.s.maxFiles) {
        T.process(param1, param2, param3, param4, blob);
    }
};
T.process looks like this:
T.process = function (file, i, changing, target, pasteblob) {
    var fr = new FileReader();
    fr.onload = function (e) {
        var blob = pasteblob || new Blob([e.target.result], {type: file.type});
        var dataURL = (win.URL || win.webkitURL).createObjectURL(blob);
        var index = changing ? i : T.currentlength;
        var filedata = {};
        if (file.type.match('image/*')) {
            var img = new Image();
            img.onload = function () {
                // Doing stuff
            };
            img.src = dataURL;
        } else {
            // Doing stuff
        }
    };
    fr.readAsArrayBuffer(pasteblob || file);
};
For larger files, the blob from items[0].getAsFile() comes back with a size of 0. Has anyone else run into this, and how did you get around it?
Note: I'm using the latest Chrome on Ubuntu 14.04.
Although I don't have any references beyond my own research into the matter, it appears there is a bug in the Ubuntu build of Chrome that prevents copying and pasting through the native JS API. If you try to paste a screenshot into a Gmail message using Chrome on Ubuntu, you get an error, but the same error doesn't appear in any other build of Chrome. The same held for my code above: I tested it from native Windows and OS X environments running Chrome, pointed at my machine, and pasting with this script worked just fine!
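As a sanity check, here is a minimal repro for the size-0 blob, independent of the plugin code (a sketch; based on the behavior reported above, pasting a large screenshot on an affected Ubuntu build of Chrome should log 0):
document.onpaste = function (e) {
    var items = (e.clipboardData || e.originalEvent.clipboardData).items;
    var file = items[0] && items[0].getAsFile();
    console.log(file ? file.size : 'no file'); // 0 indicates the Ubuntu Chrome bug
};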

Deploying webapp coded in impact javascript game engine locally, results in CORS on chrome

I am trying to make a webapp (developed with the ImpactJS game engine) run locally without a localhost server (using file:///C:/...), and I am required to make it work in Chrome.
The main reason it does not work in Chrome is that Chrome blocks my media (mainly PNG/JPG images) from being loaded from the media folder due to CORS restrictions.
After spending a few days reading up and trying several methods, I am still not able to resolve this. If anyone has experience with this, please tell me whether it is possible, and if so, which approaches I should pursue.
Methods I have tried:
1) setting img.crossOrigin = "anonymous" (failed; still blocked by Chrome)
2) opening Chrome with the flag --allow-file-access-from-files (worked, but not a feasible method for end users)
3) reading images and converting them to data URI format (failed; the data URI conversion doesn't work because of the same inherent CORS restriction)
4) attempting to use AppCache to cache all images in the browser cache (failed; it doesn't seem to work because the app isn't served from a web server)
UPDATE: I am now trying to edit the impact.image source code to convert the src to a data URL at the point where it is loaded into the image:
load: function( loadCallback ) {
    function getBase64Image(img) {
        // Create an empty canvas element
        var img2 = document.createElement("img");
        var canvas2 = document.createElement("canvas");
        // Copy the image contents to the canvas
        var ctx2 = canvas2.getContext("2d");
        img2.onload = function() {
            canvas2.width = img2.width;
            canvas2.height = img2.height;
        };
        img2.onerror = function() { console.log("Image failed!"); };
        img2.src = img + '?' + Date.now();
        // NOTE: this runs synchronously, before img2.onload has fired,
        // so the canvas is still blank at this point
        ctx2.drawImage(img2, 0, 0, img2.width, img2.height);
        return canvas2.toDataURL("image/png");
    }
    if( this.loaded ) {
        if( loadCallback ) {
            loadCallback( this.path, true );
        }
        return;
    }
    else if( !this.loaded && ig.ready ) {
        this.loadCallback = loadCallback || null;
        this.data = new Image();
        this.data.onload = this.onload.bind(this);
        this.data.onerror = this.onerror.bind(this);
        //this.data.src = ig.prefix + this.path + ig.nocache;
        //old src sets to local file, new src sets to the generated data url
        this.data.src = getBase64Image(this.path);
    }
    else {
        ig.addResource( this );
    }
    ig.Image.cache[this.path] = this;
},
For some reason the image is not being loaded inside the function. Will this work even if I get the image to load inside getBase64Image?
Short of saving everything as pre-generated Base64 data URIs, which you bake into a JS file or a script tag on your index.html page, you aren't going to have much luck here -- especially if your intent is to distribute this to an audience disconnected from a web server (which you'd need at minimum to provide a domain for the AppCache).
Mind you, in order to generate the data URIs, you yourself are probably going to need a localhost (or a build tool).
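For illustration, a minimal sketch of the kind of pre-generated asset file this suggests (the GameAssets name, the build step producing it, and the lookup hook are all assumptions):
// assets.js -- generated offline by a build tool and shipped with the game;
// maps each media path to a pre-encoded data URI so no file:// fetch is needed
var GameAssets = {
    // each value is the full base64 of the corresponding file,
    // produced offline (e.g. by base64-encoding the PNG in a build script)
    'media/player.png': 'data:image/png;base64,...'
};
// then, inside ig.Image's load, look the path up instead of hitting the filesystem:
// this.data.src = GameAssets[this.path] || (ig.prefix + this.path + ig.nocache);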

Rapidly updating image with Data URI causes caching, memory leak

I have a webpage that rapidly streams JSON from the server and displays bits of it, about 10 times per second. One part is a base64-encoded PNG image. I've found a few different ways to display the image, but all of them cause unbounded memory usage: it rises from 50 MB to 2 GB within minutes. This happens in Chrome, Safari, and Firefox; I haven't tried IE.
I discovered the memory usage by looking at Activity Monitor.app -- the Google Chrome Renderer process continuously eats memory. Then I looked at Chrome's Resources inspector (View > Developer > Developer Tools, Resources) and saw that it was caching the images. Every time I changed the img src, or created a new Image() and set its src, Chrome cached it. I can only imagine the other browsers do the same.
Is there any way to control this caching? Can I turn it off, or do something sneaky so it never happens?
Edit: I'd like to be able to use the technique in Safari/Mobile Safari. Also, I'm open to other methods of rapidly refreshing an image if anyone has any ideas.
Here are the methods I've tried. Each one resides in a function that gets called on AJAX completion.
Method 1 - Directly set the src attribute on an img tag
Fast. Displays nicely. Leaks like crazy.
$('#placeholder_img').attr('src', 'data:image/png;base64,' + imgString);
Method 2 - Replace img with a canvas, and use drawImage
Displays fine, but still leaks.
var canvas = document.getElementById("placeholder_canvas");
var ctx = canvas.getContext("2d");
var img = new Image();
img.onload = function() {
    ctx.drawImage(img, 0, 0);
};
img.src = "data:image/png;base64," + imgString;
Method 3 - Convert to binary and replace canvas contents
I'm doing something wrong here -- the images display small and look like random noise, presumably because this copies the compressed PNG file bytes rather than decoded pixel data. This method uses a controlled amount of memory (grows to 100 MB and stops), but it is slow, especially in Safari (~50% CPU usage there, 17% in Chrome). The idea came from this similar SO question: Data URI leak in Safari (was: Memory Leak with HTML5 canvas)
var img = atob(imgString);
var binimg = [];
for (var i = 0; i < img.length; i++) {
    binimg.push(img.charCodeAt(i));
}
var bytearray = new Uint8Array(binimg);
// Grab the existing image from canvas
var ctx = document.getElementById("placeholder_canvas").getContext("2d");
var width = ctx.canvas.width,
    height = ctx.canvas.height;
var imgdata = ctx.getImageData(0, 0, width, height);
// Overwrite it with new data
for (var i = 8, len = imgdata.data.length; i < len; i++) {
    imgdata.data[i - 8] = bytearray[i];
}
// Write it back
ctx.putImageData(imgdata, 0, 0);
I know it's been years since this issue was posted, but the problem still exists in recent versions of Safari. So here is a definitive solution that works in all browsers, and I think this could save jobs or lives!
Copy the following code somewhere in your html page:
// Methods to address the memory leak problem in Safari
var BASE64_MARKER = ';base64,';
var temporaryImage;
var objectURL = window.URL || window.webkitURL;

function convertDataURIToBlob(dataURI) {
    // Validate input data
    if (!dataURI) return;
    // Convert image (in base64) to binary data
    var base64Index = dataURI.indexOf(BASE64_MARKER) + BASE64_MARKER.length;
    var base64 = dataURI.substring(base64Index);
    var raw = window.atob(base64);
    var rawLength = raw.length;
    var array = new Uint8Array(new ArrayBuffer(rawLength));
    for (var i = 0; i < rawLength; i++) {
        array[i] = raw.charCodeAt(i);
    }
    // Create and return a new blob object using binary data
    return new Blob([array], {type: "image/jpeg"});
}
Then, when you receive a new frame/image base64Image in base64 format (e.g. data:image/jpeg;base64,LzlqLzRBQ...) and want to update an HTML <img /> element imageElement, use this code:
// Destroy old image
if(temporaryImage) objectURL.revokeObjectURL(temporaryImage);
// Create a new image from binary data
var imageDataBlob = convertDataURIToBlob(base64Image);
// Create a new object URL object
temporaryImage = objectURL.createObjectURL(imageDataBlob);
// Set the new image
imageElement.src = temporaryImage;
Repeat this last code as much as needed and no memory leaks will appear. This solution doesn't require the use of the canvas element, but you can adapt the code to make it work.
Try setting image.src = "" after drawing.
var canvas = document.getElementById("placeholder_canvas");
var ctx = canvas.getContext("2d");
var img = new Image();
img.onload = function() {
    ctx.drawImage(img, 0, 0);
    // after drawing, clear the src
    img.src = "";
};
img.src = "data:image/png;base64," + imgString;
This might help.
I don't think there are any guarantees about the memory usage of data URLs. If you can figure out a way to get them to behave in one browser, that guarantees little, if anything, about other browsers or versions.
If you put your image data into a blob and then create a blob URL, you can then deallocate that data.
Here's an example that turns a data URI into a blob URL; you may need to change or drop the webkit-/WebKit- prefixes on browsers other than Chrome, and possibly on future versions of Chrome.
var parts = dataURL.match(/data:([^;]*)(;base64)?,([0-9A-Za-z+/]+)/);
// assume base64 encoding
var binStr = atob(parts[3]);
// might be able to replace the following lines with just
//   var view = new Uint8Array(binStr);
// haven't tested.
// convert to binary in an ArrayBuffer
var buf = new ArrayBuffer(binStr.length);
var view = new Uint8Array(buf);
for (var i = 0; i < view.length; i++)
    view[i] = binStr.charCodeAt(i);
// end of the possibly unnecessary lines

var builder = new WebKitBlobBuilder();
builder.append(buf);
// create blob with mime type, create URL for it
var URL = webkitURL.createObjectURL(builder.getBlob(parts[1]));
return URL;
Deallocating is as easy as:
webkitURL.revokeObjectURL(URL);
And you can use your blob URL as your img's src.
Unfortunately, blob URLs do not appear to be supported in IE prior to v10.
API reference:
http://www.w3.org/TR/FileAPI/#dfn-createObjectURL
http://www.w3.org/TR/FileAPI/#dfn-revokeObjectURL
Compatibility reference:
http://caniuse.com/#search=blob%20url
I had a very similar issue.
Setting img.src to dataUrl Leaks Memory
Long story short, I simply worked around the Image element: I use a JavaScript decoder to decode and draw the image data onto a canvas. Unless the user tries to download the image, they'll never know the difference. The downside is that you're limited to modern browsers; the upside is that this method doesn't leak like a sieve :)
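For illustration, a minimal sketch of that approach, assuming a standalone JavaScript PNG decoder such as UPNG.js (the UPNG.decode/UPNG.toRGBA8 calls reflect that library's API as I recall it; treat the exact names as assumptions):
// decode the base64 PNG in JS and blit raw pixels onto a canvas,
// bypassing the Image element (and its data-URL caching) entirely
function drawFrame(ctx, base64Png) {
    var binStr = atob(base64Png);
    var bytes = new Uint8Array(binStr.length);
    for (var i = 0; i < binStr.length; i++) {
        bytes[i] = binStr.charCodeAt(i);
    }
    var decoded = UPNG.decode(bytes.buffer);                    // { width, height, ... }
    var rgba = new Uint8ClampedArray(UPNG.toRGBA8(decoded)[0]); // first (only) frame
    ctx.putImageData(new ImageData(rgba, decoded.width, decoded.height), 0, 0);
}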
Patching up ellisbben's answer: BlobBuilder is obsolete, and https://developer.mozilla.org/en-US/Add-ons/Code_snippets/StringView provides what appears to be a nice, quick conversion from base64 to Uint8Array:
in html:
<script src='js/stringview.js'></script>
in js:
window.URL = window.URL || window.webkitURL;

function blobify_dataurl(dataURL) {
    var parts = dataURL.match(/data:([^;]*)(;base64)?,([0-9A-Za-z+/]+)/);
    // assume base64 encoding; StringView does the binary conversion for us
    var view = StringView.base64ToBytes(parts[3]);
    var blob = new Blob([view], {type: parts[1]}); // pass a useful mime type here
    // create a URL for the blob
    var outURL = URL.createObjectURL(blob);
    return outURL;
}
I still don't see it actually updating the image in mobile Safari, but Chrome can receive data URLs rapid-fire over a websocket and keep up with them far better than when manually iterating over the string. And if you know you'll always receive the same type of data URL, you could even swap the regex out for a substring (likely quicker...?)
Running some quick memory profiles, it looks like Chrome is even able to keep up with deallocations (if you remember to do them...):
URL.revokeObjectURL(outURL);
I have used different methods to solve this problem, and none of them works. It seems that memory leaks when img.src = base64string, and that memory can never be released. Here is my solution.
var fs = require('fs'); // Electron (Node integration)

// write the decoded image to disk instead of feeding a data URL to the img tag
fs.writeFile('img0.jpg', img_data, function (err) {
    // console.log("saved img!");
});
document.getElementById("my-img").src = 'img0.jpg?' + img_step;
img_step += 1;
Note that you should convert the base64 string to a JPEG buffer first.
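For reference, a minimal sketch of that conversion in Node/Electron (the base64string name follows the text above; the exact data-URL prefix is an assumption):
// strip the data-URL prefix, then decode the base64 payload into a Buffer
var base64Payload = base64string.replace(/^data:image\/jpeg;base64,/, '');
var img_data = Buffer.from(base64Payload, 'base64');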
My Electron app updates the img every 50 ms, and memory doesn't leak.
Forget about the disk usage; Chrome's memory management pisses me off.
Unless Safari or Mobile Safari don't leak data URLs, server-side might be the only way to do this on all browsers.
Probably the most straightforward approach would be to make a URL for your image stream, where GETting it gives a 302 or 303 response redirecting to a single-use URL that serves the desired image. You will probably have to destroy and re-create the image tags to force a reload of the URL.
You will also be at the mercy of the browser regarding its img caching behavior, and at the mercy of my understanding (or lack thereof) of the HTTP spec. Still, unless server-side operation doesn't fit your requirements, try this first. It adds complexity to the server, but this approach uses the browser much more naturally.
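A quick sketch of the client side of that idea (the /stream/next endpoint is hypothetical; the server would answer it with a 303 redirect to a single-use image URL):
function refreshImage(container) {
    // destroy the old tag and create a fresh one to force a new request
    container.innerHTML = '';
    var img = new Image();
    img.src = '/stream/next?' + Date.now(); // cache-buster, since img caching varies by browser
    container.appendChild(img);
}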
But what about using the browser un-naturally? Depending on how browsers implement iframes and handle their associated content, you might be able to get data URLs working without leaking memory. This is kinda Frankenstein shit and is exactly the sort of nonsense no one should have to do. Upside: it could work. Downside: there are a bazillion ways to try it, and uneven, undocumented behavior is exactly what I'd expect.
One idea: embed an iframe containing a page; this page and the outer page it is embedded in communicate via cross-document messaging (note the GREEN in the compatibility matrix!). The outer page gets the PNG string and passes it along to the embedded page, which then makes an appropriate img tag. When the outer page needs to display a new image, it destroys the embedded iframe (hopefully releasing the memory of the data URL), then creates a new one and passes it the new PNG string.
If you want to be marginally more clever, you could actually embed the source for the embedded frame in the outer page as a data URL; however, this might itself leak that data URL, which I guess would be poetic justice for trying such a reach-around.
"Something that works in Safari would be better." Browser technology keeps on moving forward, unevenly. When they don't hand the functionality to you on a plate, you gotta get devious.
// NOTE: this fragment assumes vid (a video element), arr (an array of Images),
// and contflag are defined elsewhere in the page
var inc = 1;
var Bulk = 540;
var tot = 540;
var audtot = 35.90;
var canvas = document.getElementById("myCanvas");
//var imggg = document.getElementById("myimg");
canvas.width = 550;
canvas.height = 400;
var context = canvas.getContext("2d");
var variation = 0.2;
var interval = 65;

function JLoop() {
    if (inc < tot) {
        if (vid.currentTime < ((audtot * inc) / tot) - variation || (vid.currentTime > ((audtot * inc) / tot) + variation)) {
            contflag = 1;
            vid.currentTime = ((audtot * inc) / tot);
        }
        // Draw the animation
        try {
            context.clearRect(0, 0, canvas.width, canvas.height);
            if (arr[inc - 1] != undefined) {
                context.drawImage(arr[inc - 1], 0, 0, canvas.width, canvas.height);
                // clear the src after drawing, as suggested above
                arr[inc - 1].src = "";
                //document.getElementById("myimg" + inc).style.display = "block";
                //document.getElementById("myimg" + (inc - 1)).style.display = "none";
                //imggg.src = arr[inc - 1].src;
            }
            $("#audiofile").val(inc);
            // clearInterval(ref);
        } catch (e) {
        }
        inc++;
        // interval = 60;
        //setTimeout(JLoop, interval);
    }
    else {
    }
}
var ref = setInterval(JLoop, interval);
Worked for me on the memory leak, thanks dude.

Paste Clipboard Images Into Gmail Messages [duplicate]

I noticed a blog post from Google that mentions the ability to paste images directly from the clipboard into a Gmail message if you're using the latest version of Chrome. I tried this with my version of Chrome (12.0.742.91 beta-m) and it works great, using either control keys or the context menu.
From that behavior I have to assume that the latest version of WebKit used in Chrome is able to deal with images in the JavaScript paste event, but I have been unable to locate any references to such an enhancement. I believe ZeroClipboard binds to keypress events to trigger its Flash functionality, and as such wouldn't work through the context menu (also, ZeroClipboard is cross-browser, and the post says this works only in Chrome).
So, how does this work, and where was the enhancement made to WebKit (or Chrome) that enables this functionality?
I spent some time experimenting with this. It seems to roughly follow the new Clipboard API spec. You can define a "paste" event handler, look at event.clipboardData.items, and call getAsFile() on them to get a Blob. Once you have a Blob, you can use FileReader on it to see what's in it. This is how you can get a data URL for the stuff you just pasted in Chrome:
document.onpaste = function (event) {
    var items = (event.clipboardData || event.originalEvent.clipboardData).items;
    console.log(JSON.stringify(items)); // might give you mime types
    for (var index in items) {
        var item = items[index];
        if (item.kind === 'file') {
            var blob = item.getAsFile();
            var reader = new FileReader();
            reader.onload = function (event) {
                console.log(event.target.result); // data url!
            };
            reader.readAsDataURL(blob);
        }
    }
};
Once you have a data url you can display the image on the page. If you want to upload it instead, you could use readAsBinaryString, or you could put it into an XHR using FormData.
Edit: Note that each item is of type DataTransferItem. JSON.stringify might not work on the items list, but you should be able to get the mime type when you loop over the items.
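To illustrate the FormData route mentioned above, a minimal sketch (the /upload endpoint and field name are assumptions):
// inside the paste handler, once you have the blob:
var formData = new FormData();
formData.append('pastedImage', blob, 'pasted.png'); // hypothetical field name
var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload'); // hypothetical endpoint
xhr.send(formData);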
The answer by Nick seems to need small changes to still work :)
// window.addEventListener('paste', ... or
document.onpaste = function (event) {
    // use event.originalEvent.clipboardData for newer chrome versions
    var items = (event.clipboardData || event.originalEvent.clipboardData).items;
    console.log(JSON.stringify(items)); // will give you the mime types
    // find the pasted image among the pasted items
    var blob = null;
    for (var i = 0; i < items.length; i++) {
        if (items[i].type.indexOf("image") === 0) {
            blob = items[i].getAsFile();
        }
    }
    // load the image if there is a pasted image
    if (blob !== null) {
        var reader = new FileReader();
        reader.onload = function (event) {
            console.log(event.target.result); // data url!
        };
        reader.readAsDataURL(blob);
    }
};
Example running code: http://jsfiddle.net/bt7BU/225/
So the changes to Nick's answer were:
var items = event.clipboardData.items;
to
var items = (event.clipboardData || event.originalEvent.clipboardData).items;
Also, I had to take the second element from the pasted items (the first one seems to be text/html if you copy an image from another web page into the clipboard). So I changed
var blob = items[0].getAsFile();
to a loop that finds the item containing the image (see above).
I didn't know how to reply directly to Nick's answer; I hope it's fine here :$ :)
As far as I know, with HTML5 features (the File API and related APIs), accessing clipboard image data is now possible in plain JavaScript.
This, however, fails to work in IE (anything less than IE 10), and I don't know much about IE 10 support either.
For IE, the options that I believe serve as fallbacks are:
either using Adobe's AIR API,
or using a signed applet.
