I have a compression issue when I'm trying to import an image into Unity from a server.
I have a JPG image called "glass" in the Resources folder within Unity, and I want this image to be replaced by an image from a server at runtime. I found this script http://docs.unity3d.com/ScriptReference/WWW.LoadImageIntoTexture.html for importing the images and assigned it to my "glass" image.
The only problem is that the compression of the image is (NPOT) RGBA Compressed DXT5, while the code in the link states that JPGs are compressed as DXT1.
Can any of you tell me what I'm doing wrong?
#pragma strict

// random url link from google;
// the image is downloaded and DXT-compressed at runtime
var url = "https://i.ytimg.com/vi/yaqe1qesQ8c/maxresdefault.jpg";

function Start () {
    // Create a texture in DXT1 format
    GetComponent.<Renderer>().material.mainTexture = new Texture2D(4, 4, TextureFormat.DXT1, false);
    while (true) {
        // Start a download of the given URL
        var www = new WWW(url);
        // Wait until the download is done
        yield www;
        var Texture_1 : Texture2D = Resources.Load("glass") as Texture2D;
        // Assign the downloaded image to the main texture of the object
        www.LoadImageIntoTexture(Texture_1);
    }
}
I ran some tests, and it looks like a Unity bug to me.
Contrary to the documentation, the texture's format before LoadImageIntoTexture does matter, and if the texture is compressed, the result is always DXT5.
Here is what's happening:
- If you load an image into a previously uncompressed texture (RGB24, for example), the resulting format is uncompressed RGB24 or ARGB32 (depending on whether or not the image contains an alpha channel).
- If you load an image into a previously compressed texture, the result is a texture compressed with DXT5, whether or not the image has an alpha channel.
- If you use www.texture instead of www.LoadImageIntoTexture, the result is always an uncompressed texture (RGB24 or ARGB32).
- Calling Compress() manually on an uncompressed texture gives the correct format (DXT1 or DXT5, depending on the alpha channel).
Anyway, here is a workaround: instead of
www.LoadImageIntoTexture(Texture_1);
use
// Load uncompressed RGB24 or ARGB32, depending on the alpha channel
Texture_1 = www.texture;
// Compress with the correct format
Texture_1.Compress(true);
The result will be DXT1 for JPG or DXT5 for PNG, as it should be.
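Put together, a minimal sketch of the question's Start() function with the workaround applied (same UnityScript style as above, keeping the original endless re-download loop):

#pragma strict

var url = "https://i.ytimg.com/vi/yaqe1qesQ8c/maxresdefault.jpg";

function Start () {
    while (true) {
        // Start a download of the given URL and wait for it to finish
        var www = new WWW(url);
        yield www;
        // Load uncompressed first (RGB24 or ARGB32 depending on alpha) ...
        var tex : Texture2D = www.texture;
        // ... then compress: DXT1 without alpha (JPG), DXT5 with alpha (PNG)
        tex.Compress(true);
        // Assign the compressed texture to the object's material
        GetComponent.<Renderer>().material.mainTexture = tex;
    }
}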
P.S. This is not unique to JS; it happens in C# too.
I'm trying to download an image using node.js and puppeteer, but I'm running into some issues. I'm using a web scraper to gather the links of the images from the site, and then using the https/http package to download the images.
This works for the images with http and https sources, but some images have links that look like this (the whole link is very long, so I cut the rest):
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAw8AAADGCAYAAACU07w3AAAZuUlEQVR4Ae3df4yU930n8Pcslu1I1PU17okdO1cLrTD+g8rNcvRyti6247K5NG5S5HOl5hA2uZ7du6RJEGYPTFy1Nv4RUJy0cWVkeQ9ErqqriHNrR8niZuVIbntBS886rBZWCGHVsNEFRQ5BloPCzGn2B+yzZMLyaP........
I'm not sure how to handle these links or how to download the image. Any help would be appreciated.
You need to first decode the base64 payload using Node.js's Buffer.
// the "data:image/png;base64," prefix has to be removed first
const data = 'iVBORw0KGgoAAAANSUhEUgAAAw8AAADGCAYAAACU07w3AAAZuUlEQVR4Ae3df4yU930n8Pcslu1I1PU17okdO1cLrTD+g8rNcvRyti6247K5NG5S5HOl5hA2uZ7du6RJEGYPTFy1Nv4RUJy0cWVkeQ9ErqqriHNrR8niZuVIbntBS886rBZWCGHVsNEFRQ5BloPCzGn2B+yzZMLyaP';
const buffer = Buffer.from(data, 'base64');
// buffer now holds the raw image bytes; no further fetch is needed,
// just write them straight to a file
require('fs').writeFileSync('image.png', buffer);
These are base64-encoded images (mostly used for icons and small images). You can simply ignore them:
if (url.startsWith('data:')) {
    // base64 image
} else {
    // an image url
}
If you really want to mess with base64, I can give you a workaround.
import fs from 'fs';
import { parseDataURI } from 'dauria';
import mimeTypes from 'mime-types';

const fileContent = parseDataURI(file);
// you probably need an extension for that image
const ext = mimeTypes.extension(fileContent.MIME) || 'bin';
fs.writeFile("a random file" + "." + ext, fileContent.buffer, function (err) {
    console.log(err); // writes out the file without error, but it's not a valid image
});
I am sending a JPG image from the client; the image is sent in the form of a blob. Here is the JavaScript code that sends a canvas image:
canvas.toBlob((blob) => {
    this.setState({preview: blob}, () => {
        console.log(blob.size);
        let data = new FormData();
        // const test = URL.createObjectURL(blob);
        // console.log(test);
        const file = new File([blob], "File name", {type: "image/png"});
        data.append("file", file);
        console.log(this.state.preview);
        axios({
            method: "POST",
            data: data,
            url: "/media",
            headers: {
                'Content-Type': 'multipart/form-data'
            }
        }).then().catch();
        // axios.post("http://localhost:8085/api/media", data).then(r => {}).catch()
    });
}, "image/jpg", 0)
The selected file is a JPG that is converted to a canvas for various reasons, and that canvas is then sent to the server as a blob.
Here is the server-side image-handling code, written in Java:
public void uploadMedia(MultipartFile mediaFile) {
    try {
        BufferedImage image = ImageIO.read(mediaFile.getInputStream());
        System.out.println(image.getColorModel().getColorSpace().getType() == ColorSpace.TYPE_RGB); // TRUE
        ImageIO.write(image, "jpg", new File("/Users/puspender/Desktop/Test/image.jpg")); // BLACK IMAGE with CMYK color space
        ImageIO.write(image, "png", new File("/Users/puspender/Desktop/Test/image.png"));
    } catch (Exception e) {}
}
If I choose the png format for ImageIO.write, the image gets saved fine; but if the format is jpg, a black image gets saved, and the color space of that black image changes to CMYK, even though the original file was RGB (I also printed this to the console before saving the file).
This issue only comes up when the image is cropped (or otherwise edited) on the client, a canvas is created from it, and the blob from that canvas is sent as the payload. No issue comes up when the file is selected using <input type="file" />.
Update: As Mike asked, I read back and re-saved the black JPG that ImageIO generates. The result is the same: a black image.
File file = new File("/location/generated_black.jpg");
BufferedImage image = ImageIO.read(file);
ImageIO.write(image, "jpg", new File("/location/from-java-main-400x400.jpg"));
I also tested the above code with the original file, and that time ImageIO handled it correctly.
One important thing I noticed: the black image is actually not black; its colors are just distorted by the change of color space. I noticed this when I uploaded it to AWS S3. Here is the image (CMYK):
The original file is below:
I managed to get it working.
A great person on this thread suggested this solution, and it worked well.
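The suggested solution isn't quoted above; the usual remedy for this symptom, sketched below under the assumption that the canvas blob is really a PNG with an alpha channel (toBlob falls back to image/png when it doesn't recognise "image/jpg"), is to redraw the decoded image onto an alpha-free BufferedImage before writing the JPEG:

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Inside uploadMedia: ImageIO encodes a BufferedImage with an alpha channel
// as a 4-component JPEG, which most readers then misinterpret as CMYK.
// Flattening onto an RGB-only image avoids that.
BufferedImage image = ImageIO.read(mediaFile.getInputStream());
BufferedImage rgb = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
Graphics2D g = rgb.createGraphics();
g.drawImage(image, 0, 0, Color.WHITE, null); // flatten transparency onto white
g.dispose();
ImageIO.write(rgb, "jpg", new File("/Users/puspender/Desktop/Test/image.jpg"));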
But a few questions remain:
1. Why does this only happen with blob data sent by JavaScript, and not with the normal file selector mentioned earlier?
2. Why is there an issue only when setting the image format to jpg and not png? And why does ImageIO change the color space for jpg?
RGB and CMYK are not colour spaces, but rather colour encoding models.
Neither, on its own, carries enough information to define a meaningful conversion.
An RGB colour space, as defined in the glossary of the ISO 22028-1 standard, must include definitions of:
1. Colour primaries
2. Transfer function(s)
3. White point
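(For a concrete instance, take sRGB, which I believe pins these down as: red, green, and blue primaries at CIE xy chromaticities of roughly (0.64, 0.33), (0.30, 0.60), and (0.15, 0.06); a D65 white point; and a piecewise transfer function with an effective gamma of about 2.2. Only once all three are fixed do RGB pixel values correspond to actual colours.)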
For CMYK, the ink responses must be clearly characterised for variables such as paper type, ink composition, and illuminant.
Ignoring any of these facets makes the resulting colour transformation utterly meaningless, doing no one any good.
In the HTML5 version of my LibGDX game, sometimes canvas.toDataUrl("image/png") returns a truncated string yielding a black image.
CanvasElement canvas = ((GwtApplication)Gdx.app).getCanvasElement();
String dataUrl = canvas.toDataUrl("image/png");
Window.open(dataUrl, "_blank", "");
The odd part is that sometimes it works. When it does work I get a ~100KiB image as expected, and the new window opens with an address bar just saying "data:". I can send this to a webservice and translate from Base64 into the bytes of a proper PNG and OSX preview shows it just fine too.
When it doesn't work, the new window shows a black image of the correct dimensions, and an address bar with Base64-encoded data (starting data:image/png;base64,iVBORw0KGgoAAAAN...) but ending in an ellipsis that appears to be rendered by the browser UI rather than three periods in the actual data string. The data in this case is ~31KiB. When I try transcoding this via my webservice, I get the same black rectangle.
I see this happen in both Chrome and Firefox.
Any ideas? The code to get the canvas contents is very simple, so I can't see how I could be doing that wrong. I'm thinking it's either a bug in the browsers, or some kind of timing issue between LibGDX and rendering.
This was caused by LibGDX not preserving the drawing buffer. The LibGDX guys very kindly fixed this in a nightly build that is now available.
Updating to the latest build of 1.0-SNAPSHOT and setting the flag below now works reliably:
@Override
public GwtApplicationConfiguration getConfig () {
    if (config == null) {
        config = new GwtApplicationConfiguration(1280, 960);
        config.preserveDrawingBuffer = true;
    }
    return config;
}
I have a png image file of size 1.8 MB.
While trying to copy and paste the image file in Chrome, I am using the DataTransferItem.getAsFile() method.
However, the file object returned by the above call is ~11 MB.
Here is the code snippet:
items = clipboardItems.items;
item = this._getImageItem(items);
if (item) {
    file = item.getAsFile();
}
// file.size is now > 11 MB
Why is there such a huge difference in file size?
Is there any way I can retain the original (or near to original) file size?
It looks like the getAsFile() function returns a blob of a bitmap image.
See http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-March/030891.html for a discussion of that.
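If that's the case, the size difference is just uncompressed pixels: an RGBA bitmap costs 4 bytes per pixel. A back-of-the-envelope check, with dimensions that are purely hypothetical but picked to match the observed size:

// hypothetical dimensions, for illustration only
var width = 1700, height = 1600;
var rawBytes = width * height * 4;                    // 4 bytes per RGBA pixel
console.log((rawBytes / (1024 * 1024)).toFixed(1));   // "10.4" MiB - near the ~11 MB observed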
I have the following code to write an image into the filesystem, and read it back for display. Prior to trying out the filesystem API, I loaded the whole base64 image into the src attribute and the image displayed fine. Problem is the images can be large so if you add a few 5MB images, you run out of memory. So I thought I'd just write them to the tmp storage and only pass the URL into the src attribute.
Trouble is, nothing gets displayed.
Initially I thought it might be something wrong with the URL, but then I went into the filesystem directory, found the image it was referring to, physically replaced it with the real binary image, and gave it the same name as the file it replaced. This worked fine and the image is displayed correctly, so the URL looks good.
The only conclusion I can come to is that the writing of the image is somehow wrong - particularly the point where the blob is created. I've looked through the blob API and can't see anything that I may have missed, however I'm obviously doing something wrong because it seems to be working for everyone else.
As an aside, I also tried to store the image in IndexedDB and use the createObjectURL to display the image - again, although the URL looks correct, nothing is displayed on the screen. Hence the attempt at the filesystem API. The blob creation is identical in both cases, with the same data.
The source data is a base64-encoded string, as I mentioned. Yes, I did also try storing the raw base64 data in the blob (with and without the prefix), and that didn't work either.
Other info: Chrome version 28, on Ubuntu Linux.
// strip the base64 prefix ...
var regex = /^data.+;base64,/;
if (regex.test(imgobj)) { // it's base64
    imgobj = imgobj.replace(regex, "");
    //imgobj = B64.decode(imgobj);
    imgobj = window.atob(imgobj);
} else {
    console.log("it's already :", typeof imgobj);
}
// store the object into the tmp space
window.requestFileSystem(window.TEMPORARY, 10*1024*1024, function(fs) {
    // check if the file already exists
    fs.root.getFile(imagename, {create: false}, function(fileEntry) {
        console.log("File exists: ", fileEntry);
        callback(fileEntry.toURL(), fileEntry.name);
    }, function (e) { // file doesn't exist, so create it
        fs.root.getFile(imagename, {create: true}, function (fe) {
            console.log("file is: ", fe);
            fe.createWriter(function (fw) {
                fw.onwriteend = function (e) {
                    console.log("write complete: ", e);
                    console.log("size of file: ", e.total);
                    callback(fe.toURL(), fe.name);
                };
                fw.onerror = function (e) {
                    console.log("Write failed: ", e.toString());
                };
                var data = new Blob([imgobj], {type: "image/png"});
                fw.write(data);
            }, fsErrorHandler);
        }, fsErrorHandler);
    });
}, fsErrorHandler);
Output from the callback is:
<img class="imgx" src="filesystem:file:///temporary/closed-padlock.png" width="270px" height="270px" id="img1" data-imgname="closed-padlock.png">
I'm at a bit of a standstill unless someone can provide some guidance...
UPDATE
I ran a test to encode and decode the base64 image with both the B64 encoder/decoder and atob/btoa:
console.log(imgobj); // this is the original base64 file from the canvas.toDataURL function
/* B64 is broken*/
B64imgobjdecode = B64.decode(imgobj);
B64imgobjencode = B64.encode(B64imgobjdecode);
console.log(B64imgobjencode);
/* atob and btoa decodes and encodes correctly*/
atobimgobj = window.atob(imgobj);
btoaimgobj = window.btoa(atobimgobj);
console.log(btoaimgobj);
The results show that the btoa/atob functions work correctly, but B64 does not - probably because the original encoding didn't use the B64.encode function.
I also ran the resulting file in the filesystem's TEMPORARY storage through an online base64 encoder for comparison, and the results are totally different. So the question is: while in the filesystem temp storage, is the image supposed to be an exact copy, or is it padded with 'something' that only the filesystem API understands? Remember, when I put the original PNG into the filesystem directory the image displayed correctly, which tends to indicate that metadata about the image (e.g. the filename) is held elsewhere.
Can someone who has a working implementation of this confirm whether the images are stored as-is in the filesystem, or padded with additional metadata?
So, to answer my own question: the core problem was in the base64 encoding/decoding. I've since changed this to use Ajax with response types like arraybuffer and blob, and things have started working.
To answer the last part of the question: in the filesystem tmp storage, the file is supposed to be an exact binary copy - I verified this in Chrome and PhoneGap.
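For anyone hitting the same wall, a likely culprit is putting the atob() output - a binary string - straight into a Blob, which re-encodes it as UTF-8 and mangles every byte above 0x7F. A minimal sketch of a binary-safe conversion, reusing the imgobj variable from above:

// imgobj is a data-URL string such as canvas.toDataURL() returns
var base64 = imgobj.replace(/^data.+;base64,/, "");
var binary = window.atob(base64);          // a binary *string*, one char per byte
var bytes = new Uint8Array(binary.length); // copy it into a real byte array
for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
}
// a Blob built from the typed array keeps the bytes intact
var data = new Blob([bytes], {type: "image/png"});

An XMLHttpRequest with responseType set to 'blob' or 'arraybuffer', as described above, avoids the string round-trip entirely.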