Canvas.toDataUrl sometimes truncated in Chrome/Firefox - javascript

In the HTML5 version of my LibGDX game, sometimes canvas.toDataUrl("image/png") returns a truncated string yielding a black image.
CanvasElement canvas = ((GwtApplication)Gdx.app).getCanvasElement();
String dataUrl = canvas.toDataUrl("image/png");
Window.open(dataUrl, "_blank", "");
The odd part is that it sometimes works. When it does, I get a ~100KiB image as expected, and the new window opens with an address bar just saying "data:". I can send this to a webservice, decode the Base64 into the bytes of a proper PNG, and OS X Preview shows it just fine too.
When it doesn't work, the new window shows a black image of the correct dimensions, and an address bar containing Base64-encoded data (starting data:image/png;base64,iVBORw0KGgoAAAAN...), but ending in an ellipsis that appears to be rendered by the browser UI rather than being three periods in the actual data string. The data in this case is ~31KiB. When I try transcoding this via my webservice, I get the same black rectangle.
I see this happen in both Chrome and Firefox.
Any ideas? The code to get the canvas contents is very simple, so I can't see how I could be doing it wrong. I'm thinking it's either a bug in the browsers or some kind of timing issue with LibGDX and rendering.

This was caused by LibGDX not preserving the WebGL drawing buffer: by default the buffer may be cleared once a frame has been composited, so reading the canvas outside the render callback can return blank or partial data. The LibGDX maintainers very kindly fixed this in a nightly build that is now available.
Updating to the latest build of 1.0-SNAPSHOT and setting the flag below now works reliably:
@Override
public GwtApplicationConfiguration getConfig () {
    if (config == null) {
        config = new GwtApplicationConfiguration(1280, 960);
        // Keep the WebGL drawing buffer after compositing so the canvas
        // can be read back later via toDataUrl().
        config.preserveDrawingBuffer = true;
    }
    return config;
}
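As an aside, a plain-JavaScript alternative that avoids preserveDrawingBuffer (my own sketch, not part of the LibGDX fix) is to read the canvas back in the same frame it is drawn, before the browser composites and clears the buffer:
// Sketch: capture a WebGL canvas without preserveDrawingBuffer by reading
// it back in the same frame it is rendered. drawScene is a placeholder
// for whatever function renders the frame.
function captureFrame(canvas, drawScene) {
  return new Promise(function (resolve) {
    requestAnimationFrame(function () {
      drawScene();                            // render this frame
      resolve(canvas.toDataURL('image/png')); // read back before the clear
    });
  });
}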

Related

Get pixel data back from programmatically generated image

I generate an array of pixels in client-side JavaScript code and convert it to a blob. Then I pass the URL of the blob as image.src and revoke it in the image.onload callback. I don't keep any references to the data generated by the previous steps, so it can be freed by the GC.
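A minimal sketch of that pipeline (my own illustration; the putImageData-based generator and all names are assumptions):
// pixels: Uint8ClampedArray of RGBA values, width * height * 4 long
function pixelsToImage(pixels, width, height, onReady) {
  var canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  canvas.getContext('2d').putImageData(new ImageData(pixels, width, height), 0, 0);
  canvas.toBlob(function (blob) {
    var img = new Image();
    var url = URL.createObjectURL(blob);
    img.onload = function () {
      URL.revokeObjectURL(url); // this early revoke is what bites later in Chrome
      onReady(img);
    };
    img.src = url;
  });
}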
There are many images generated this way, and it works fine. But sometimes the user may want to save a generated image by clicking a Save button near it. I don't want to generate the image again, because generation is slow, and the image is already generated and visible on screen. So I want to get my pixels back from the image. I tried to create a canvas again, draw the image on it and then call toBlob, but the browser treats the image as cross-origin and throws an exception: "Failed to execute 'toBlob' on 'HTMLCanvasElement': tainted canvases may not be exported". I get similar errors with canvas.toDataURL and canvasContext.getImageData.
Is there a workaround for this problem?
I also tried creating canvases instead of images, but when I create the second canvas, the content of the first one is cleared.
Added
This error occurs only in Chrome and other WebKit-based browsers. Firefox and MS Edge work fine. And when I commented out the line of code that revoked the blob URL, the error disappeared: I could draw the image on the canvas and get its pixel data without CORS issues. But that is pointless, because then I keep a blob that is never deleted.
My page may generate many images; the number depends on the user and is unbounded. The size of the images is also unbounded; it may be useful to generate even 4096x4096 images. So I want to reduce the memory consumption of my page as much as possible. All these images should be downloadable. Generating most images uses previously generated images, so to regenerate the last image in a chain, I must regenerate all the images before it.
So I need a workaround only for the Chrome browser.
Added 2
I tried to reproduce this problem in JS Fiddle, but couldn't. However, locally my code doesn't work; I developed my app locally and haven't tried running it on a server. Create a test.html file on your computer and open it in a browser (locally, without a server):
<html>
<body>
<pre id="log"></pre>
</body>
<script>
var log = document.getElementById("log");
var canvas = document.createElement("canvas");
canvas.width = canvas.height = 256;
var ctx = canvas.getContext("2d");
canvas.toBlob(function(blob) {
  var img = new Image();
  var blobUrl = URL.createObjectURL(blob);
  img.onload = function() {
    // Revoking before drawImage is what triggers the taint in Chrome over file://
    URL.revokeObjectURL(blobUrl);
    ctx.drawImage(img, 0, 0);
    try {
      canvas.toBlob(function(blob) { log.textContent += 'success\n'; });
    } catch (e) {
      log.textContent += e.message + '\n';
    }
  };
  img.src = blobUrl;
});
</script>
</html>
It will print "Failed to execute 'toBlob' on 'HTMLCanvasElement': Tainted canvases may not be exported."
So I think my workaround is to detect that the page is running in a WebKit browser over the file:/// protocol, and to defer revoking blob URLs until page unload only for that combination.
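A minimal sketch of that detection (my own illustration; the user-agent test is a rough heuristic, and all names are placeholders):
// Defer revocation only for WebKit/Blink browsers on the file:// protocol
var needsDeferredRevoke =
    location.protocol === 'file:' && /AppleWebKit/.test(navigator.userAgent);

var pendingUrls = [];
window.addEventListener('unload', function () {
  pendingUrls.forEach(function (u) { URL.revokeObjectURL(u); });
});

function revokeLater(blobUrl) {
  if (needsDeferredRevoke) pendingUrls.push(blobUrl); // revoke at unload
  else URL.revokeObjectURL(blobUrl);                  // safe to revoke now
}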
That indeed sounds like a bug in Chrome's implementation; you may want to report it.
What seems to happen is that Chrome only checks whether the image was loaded from a clean origin the first time it is drawn on the canvas.
Since by that time you have already revoked the blob URI, Chrome can't map the URI back to a clean origin (the file:// protocol is quite strict in Chrome).
But there seems to be a simple workaround:
by drawing the image on a [the reviver]* canvas before revoking the blob URI, Chrome will mark the image as clean, and will remember it.
*[edit] Actually it seems it needs to be the same canvas...
So you can simply create your export canvas beforehand and draw each image on it (to save memory you can even set it to 1x1px when you are not using it for export) before you revoke the image's src:
// somewhere accessible to the export logic
var reviver = document.createElement('canvas').getContext('2d');
// in your canvas-to-img logic
canvas.toBlob(function(blob) {
  var img = new Image();
  img.onload = function() {
    reviver.canvas.width = reviver.canvas.height = 1; // save memory
    reviver.drawImage(this, 0, 0); // mark our image as origin-clean
    URL.revokeObjectURL(this.src);
  };
  img.src = URL.createObjectURL(blob);
  probablySaveInAnArray(img);
});
// and when you want to save your images to disk
function reviveBlob(img) {
  reviver.canvas.width = img.width;
  reviver.canvas.height = img.height;
  reviver.drawImage(img, 0, 0);
  reviver.canvas.toBlob(...
}
But note that this method will create a new Blob, not retrieve the previous one, which has probably been collected by the garbage collector by now. Unfortunately that is your only option, since you did revoke the blob URI...

canvas.toDataURL() download size limit

I am attempting to download an entire canvas image using canvas.toDataURL(). The image is a map rendered on the canvas (OpenLayers 3).
In Firefox I can use the following to download the map on the click of a link:
var exportPNGElement = document.getElementById('export_image_button');
if ('download' in exportPNGElement) {
  map.once('postcompose', function(event) {
    var canvas = event.context.canvas;
    exportPNGElement.href = canvas.toDataURL('image/png');
  });
  map.renderSync();
} else {
  alert("Sorry, something went wrong during our attempt to create the image");
}
However, in Chrome and Opera I'm hitting a size limit on the link; I have to physically make the window smaller for the download to work.
There are size-limit differences between browsers, and Chrome is particularly restrictive. A similar post (over two years old now) suggests an extensive server-side workaround:
canvas.toDataURL() for large canvas
Is there a client-side workaround for this at all?
Check out toBlob: https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toBlob
canvas.toBlob(function(blob) {
  exportPNGElement.href = URL.createObjectURL(blob);
});
Browser support is not as good as toDataURL's, but Chrome and Firefox have it, so it solves your biggest issue. The MDN link above also has a polyfill based on toDataURL, so you get the best possible support.
Just in case you didn't know, you can also dramatically reduce the size by using JPEG compression:
exportPNGElement.href = canvas.toDataURL('image/jpeg', 0.7);
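If memory is a concern, you may also want to revoke the object URL once the download has started. This is my own addition, not part of the answer; deferring the revoke with setTimeout is an assumption about download timing:
canvas.toBlob(function (blob) {
  var url = URL.createObjectURL(blob);
  exportPNGElement.href = url;
  exportPNGElement.addEventListener('click', function () {
    // Defer the revoke so it happens after the download has begun
    setTimeout(function () { URL.revokeObjectURL(url); }, 0);
  }, { once: true });
});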

js compression issue when loading image for www in unity

I have a compression issue when I'm trying to import an image into Unity from a server.
I have a JPG image called "glass" in the Resources folder within Unity, and I want this image to be replaced by an image from a server at runtime. I found this script http://docs.unity3d.com/ScriptReference/WWW.LoadImageIntoTexture.html for importing the images and assigned it to my "glass" image.
The only problem is that the compression of the image is (NPOT) RGBA Compressed DXT5, while the code in the link states that JPGs are compressed as DXT1.
Can any of you tell me what I'm doing wrong?
#pragma strict
// Random URL link from Google;
// the downloaded image is DXT-compressed at runtime
var url = "https://i.ytimg.com/vi/yaqe1qesQ8c/maxresdefault.jpg";
function Start () {
    // Create a texture in DXT1 format
    GetComponent.<Renderer>().material.mainTexture = new Texture2D(4, 4, TextureFormat.DXT1, false);
    while (true) {
        // Start a download of the given URL
        var www = new WWW(url);
        // Wait until the download is done
        yield www;
        var Texture_1 : Texture2D = Resources.Load("glass") as Texture2D;
        // Assign the downloaded image to the main texture of the object
        www.LoadImageIntoTexture(Texture_1);
    }
}
I ran some tests, and it looks like a Unity bug to me.
Contrary to the documentation, the texture's format before LoadImageIntoTexture does matter, and if the texture is compressed, the result is always DXT5.
Here is what happens:
If you load an image into a previously uncompressed texture (RGB24, for example), the resulting format is uncompressed RGB24 or ARGB32 (depending on whether the image contains an alpha channel).
If you load an image into a previously compressed texture, the result is a texture compressed with DXT5, whether or not the image has an alpha channel.
If you use www.texture instead of www.LoadImageIntoTexture, the result is always an uncompressed texture (RGB24 or ARGB32).
Calling Texture2D.Compress() manually on an uncompressed texture gives the correct format (DXT1 or DXT5, depending on the alpha channel).
Anyway, here is a workaround: instead of
www.LoadImageIntoTexture(Texture_1);
use
// Load uncompressed RGB24 or ARGB32, depending on the alpha channel
Texture_1 = www.texture;
// Compress with the correct format
Texture_1.Compress(true);
The result will be DXT1 for JPG or DXT5 for PNG, as it should be.
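If you want to confirm which format you ended up with at runtime, you can log the texture's format property (a small sketch in the same UnityScript style, under the same setup as above):
// Verify the resulting compression format at runtime
Texture_1 = www.texture;      // uncompressed RGB24 or ARGB32
Texture_1.Compress(true);     // DXT1 or DXT5, depending on alpha
Debug.Log(Texture_1.format);  // prints the TextureFormat enum value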
P.S. This is not unique to UnityScript; it happens in C# too.

PhantomJS render of individual frame not working properly

I am using PhantomJS to create a local copy of a website. I have a function which traverses the frame structure of the website and grabs the frame contents as it goes, storing them in a global array. That part is working fine at the moment; the problem is:
At each step I am attempting to convert the frame to a Base64-encoded image using
var temp = require('webpage').create();
temp.content = currpage.frameContent; //set the temp page to be the current frame
var b64 = temp.renderBase64('png');
If I simply export currpage.frameContent to a file and open it, I can see its contents, and opening it in a browser shows that it does indeed display what it is supposed to (ads, for the most part).
However, the b64 variable has no value, and no errors pop up when running the program.
I should also note that b64 isn't always empty; sometimes I do get a proper rendering of the frame, depending on the site I am scraping.
The issue has since been uncovered. Although we are switching directions with the project, I will post the fix for my problem.
To get the temp page to render, all I had to do was set temp's viewportSize:
temp.viewportSize = {
  width: 480,
  height: 800
};
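Putting it together, the frame-capture step then looks like this (a sketch assuming the currpage.frameContent traversal described above):
var temp = require('webpage').create();
// The viewport must be set, or rendering can silently produce nothing
temp.viewportSize = { width: 480, height: 800 };
temp.content = currpage.frameContent; // set the temp page to the current frame
var b64 = temp.renderBase64('png');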

image shown rotated on iPad and not on laptop

I've created a small test site in which you can upload a picture, and the selected picture is shown without a round-trip to the backend. So far, nothing very interesting:
$('input').on('change', function () {
  var file = this.files[0];
  var reader = new FileReader();
  reader.onload = function (event) {
    var base64 = event.target.result;
    $('<img>').attr('src', base64).appendTo('body');
  };
  reader.readAsDataURL(file);
});
However, I noticed that on my iPad 3 some pictures are shown upside-down. Googling pointed me to the EXIF metadata stored in the image (and thus in the Base64 data), which defines the orientation of the picture. The other odd thing is that on my laptop the same pictures are shown the right way up. Is there any way to prevent or fix this behaviour? (I want both devices to show the picture the same way, and if possible the right way up, not upside-down.)
This is not a CSS issue; it's actually an issue with the image itself. Some browsers interpret the orientation of the image through its metadata, while others ignore it. Simply open the image in any image-editing software and export it; re-exporting typically bakes the orientation into the pixel data. Upload it to your server and let me know if that worked!
EDIT - Reference this URL for a possible solution:
Accessing JPEG EXIF rotation data in JavaScript on the client side
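For reference, here is a sketch of the client-side approach from that link: scan the JPEG's APP1 (EXIF) segment for the orientation tag (0x0112). This is my own adaptation and assumes a baseline JPEG; the callback receives the orientation value, or -1/-2 when it can't be read:
function getOrientation(file, callback) {
  var reader = new FileReader();
  reader.onload = function (e) {
    var view = new DataView(e.target.result);
    if (view.getUint16(0, false) != 0xFFD8) return callback(-2); // not a JPEG
    var length = view.byteLength, offset = 2;
    while (offset < length) {
      var marker = view.getUint16(offset, false);
      offset += 2;
      if (marker == 0xFFE1) { // APP1 segment, where EXIF lives
        if (view.getUint32(offset += 2, false) != 0x45786966) return callback(-1); // "Exif"
        var little = view.getUint16(offset += 6, false) == 0x4949; // TIFF byte order
        offset += view.getUint32(offset + 4, little); // jump to IFD0
        var tags = view.getUint16(offset, little);
        offset += 2;
        for (var i = 0; i < tags; i++) // each IFD entry is 12 bytes
          if (view.getUint16(offset + i * 12, little) == 0x0112) // orientation tag
            return callback(view.getUint16(offset + i * 12 + 8, little));
      } else if ((marker & 0xFF00) != 0xFF00) {
        break; // not a valid marker, stop scanning
      } else {
        offset += view.getUint16(offset, false); // skip this segment
      }
    }
    return callback(-1); // no orientation tag found
  };
  reader.readAsArrayBuffer(file);
}
// Usage with the upload handler above:
// getOrientation(this.files[0], function (o) { console.log('EXIF orientation:', o); });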
