I want to take an image that I created by changing pixel data and use it as a mask. The big question is: how can I turn a data URL into an image so I can use it as a mask? As I see it, the data URL is a base64-encoded image, and I need to get the real image back out of that base64 data.
I am currently trying to use Ben Barnett's Canvas Utility for the masking, but I am open to using something else to mask with if it can work with a data URL.
If I understand correctly, you have the base64 data for the image, and you need an Image object. All you need to do is:
var img = new Image();
img.src = dataurl;
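Note that even with a data URL the image loads asynchronously, so wait for its load event before handing it to the masking code. A minimal sketch of using the loaded image as an alpha mask with plain canvas compositing (I'm not assuming the exact API of Ben Barnett's utility; the canvas id here is hypothetical):

var img = new Image();
img.onload = function() {
    // The image is fully decoded here and can be used as a mask.
    // Keep only the parts of the existing canvas content that the mask covers:
    var ctx = document.getElementById('canvas').getContext('2d');
    ctx.globalCompositeOperation = 'destination-in';
    ctx.drawImage(img, 0, 0);
    ctx.globalCompositeOperation = 'source-over'; // restore the default
};
img.src = dataurl;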
When you load a WebGL texture directly from a DOM image, how do you tell whether the image has an alpha channel or not? Is there any way except to guess based on the filename (e.g. "if it contains .PNG it may be RGBA, otherwise RGB")? The DOM image has a width and height, but nothing I can see that says what format it is, i.e.:
const img = await loadDOMImage(url);
const format = gl.RGBA; //Does this always need to be RGBA? I'm wasting space in most cases where it's only RGB
const internalFormat = gl.RGBA;
const type = gl.UNSIGNED_BYTE; //This is guaranteed to be correct, right? No HDR formats supported by the DOM?
gl.texImage2D(gl.TEXTURE_2D, 0, internalFormat, format, type, img);
My load function looks like this FWIW:
async loadDOMImage(url) {
    return new Promise((resolve, reject) => {
        const img = new Image();
        img.crossOrigin = 'anonymous';
        img.addEventListener('load', function() {
            resolve(img);
        }, false);
        img.addEventListener('error', function(err) {
            reject(err);
        }, false);
        img.src = url;
    });
}
how do you tell if the image has an alpha channel or not?
You can't. You can only guess.
You could see that the URL ends in .png and assume it has alpha, but you might be wrong.
You could draw the image into a 2D canvas, call getImageData, read all the alpha values, and see whether any of them are not 255 (see the sketch below).
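A minimal sketch of that second approach, assuming img is an already-loaded, same-origin (or CORS-enabled) image so getImageData is allowed to read it:

function imageHasAlpha(img) {
    // Draw the image into a scratch canvas so its pixels can be read back.
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    const data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    // Pixels are RGBA, so alpha is every 4th byte; anything below 255 means alpha is used.
    for (let i = 3; i < data.length; i += 4) {
        if (data[i] !== 255) return true;
    }
    return false;
}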
const format = gl.RGBA;
Does this always need to be RGBA? I'm wasting space in most cases where its only RGB
It's unlikely to waste space. Most GPUs work best with RGBA values, so even if you choose RGB it's unlikely to save space.
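If you do want to request gl.RGB when an image turns out to have no alpha, here is a minimal sketch reusing the imageHasAlpha() helper sketched above (both are illustrative, not part of the original answer):

const format = imageHasAlpha(img) ? gl.RGBA : gl.RGB;
// In WebGL1 the internalformat must match format for this overload.
gl.texImage2D(gl.TEXTURE_2D, 0, format, format, gl.UNSIGNED_BYTE, img);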
const type = gl.UNSIGNED_BYTE;
This is guaranteed to be correct, right?
texImage2D takes the image you pass in and converts it to type and format. It then passes that converted data to the GPU.
No HDR formats supported by the DOM?
That is undefined and browser specific. I know of no HDR image formats supported by any browsers. What image formats a browser supports is up to the browser. For example Firefox and Chrome support animated webp but Safari does not.
A common question from developers is whether they should turn on blending / transparency for a particular texture. Some mistakenly believe that if the texture has alpha they should, and otherwise they should not. This is false. Whether blending / transparency should be used is entirely separate from the format of the image and needs to be stored separately.
The same is true with other things related to images. What format to upload the image into the GPU is application and usage specific and has no relation to the format of the image file itself.
From what I've been able to find, the only way to get from imgURL to an imgData array is to create a canvas and draw an image onto that canvas. Why is there no direct way to convert between imgURL and an imgData array?
I want to load an image, copy it (e.g. 2 times), and manipulate the two copies. Somehow I want to do some "post-processing" on the two clones. I DON'T want to grab canvas regions and manipulate them. I am programming a realtime 2D game, and it would be madness to manipulate every image within every frame.
Many of the solutions I found were just "cutting" parts from the canvas, dealing with them, and writing them back onto the canvas. But I want to have the manipulated images stored in image objects so I can draw them directly, as if they were the "real" image.
A canvas is still the best way of manipulating images. Canvas slices that you get via context.getImageData() can be painted back via context.putImageData() (see Pixel manipulation with canvas), so I don't see a real advantage in converting image data into images. However, if you prefer a "real" image object, you can use canvas.toDataURL() and create an image object with this URL. Something along these lines:
// Create temporary canvas
var canvas = document.createElement("canvas");
canvas.setAttribute("width", imageData.width);
canvas.setAttribute("height", imageData.height);
// Put image data into canvas
var context = canvas.getContext("2d");
context.putImageData(imageData, 0, 0);
// Extract canvas data into an image object
var image = new Image();
image.src = canvas.toDataURL("image/png");
image.onload = function() {alert("Image object can be used")};
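For the game use case, it may help to wrap this in a small helper that turns an ImageData object into a ready-to-draw Image once, up front. The function name and Promise wrapper below are just an illustrative sketch, not part of the original answer:

function imageDataToImage(imageData) {
    return new Promise(function(resolve, reject) {
        var canvas = document.createElement("canvas");
        canvas.width = imageData.width;
        canvas.height = imageData.height;
        canvas.getContext("2d").putImageData(imageData, 0, 0);

        var image = new Image();
        image.onload = function() { resolve(image); };
        image.onerror = reject;
        image.src = canvas.toDataURL("image/png");
    });
}

// Usage: do the pixel manipulation once at load time, then draw the resulting
// Image every frame like any other image:
// imageDataToImage(processedData).then(function(img) { ctx.drawImage(img, 0, 0); });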
Anyone know how I would convert bytes which are sent via a websocket (from a C# app) to an image? I then want to draw the image on a canvas. I can see two ways of doing this:
1. Somehow draw the image on the canvas in byte form without converting it.
2. Convert the bytes to a base64 string somehow in JavaScript, then draw.
Here's my function which receives the bytes for drawing:
function draw(imgData) {
    var img = new Image();
    img.onload = function() {
        cxt.drawImage(img, 0, 0, canvas.width, canvas.height);
    };
    // What I was using before...
    img.src = "data:image/jpeg;base64," + imgData;
}
I was receiving the image already converted as a base64 string before, but after learning that sending the bytes is smaller in size (30% smaller?) I would prefer to get this working. I should also mention that the image is a jpeg.
Anyone know how I would do it? Thanks for the help. :)
I used this in the end:
function draw(imgData, frameCount) {
    var r = new FileReader();
    r.readAsBinaryString(imgData);
    r.onload = function() {
        var img = new Image();
        img.onload = function() {
            cxt.drawImage(img, 0, 0, canvas.width, canvas.height);
        };
        img.src = "data:image/jpeg;base64," + window.btoa(r.result);
    };
}
I needed to read the bytes into a string before using btoa().
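As an aside, if the WebSocket delivers each frame as a Blob (socket.binaryType = 'blob'), FileReader.readAsDataURL can produce the data URL in one step, so the manual btoa() call isn't needed. A minimal sketch, assuming cxt and canvas exist as in the code above:

function draw(imgData) {
    // If the received Blob has no MIME type, rewrap it so the data URL is typed correctly.
    var blob = imgData.type ? imgData : new Blob([imgData], { type: "image/jpeg" });
    var r = new FileReader();
    r.onload = function() {
        var img = new Image();
        img.onload = function() {
            cxt.drawImage(img, 0, 0, canvas.width, canvas.height);
        };
        img.src = r.result; // already a complete data: URL, no btoa() needed
    };
    r.readAsDataURL(blob);
}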
If your image is really a JPEG already, you can just convert the received data to a base64 string. Firefox and WebKit browsers (as I recall) have a built-in function, btoa(), that converts an input string to a base64-encoded string. Its counterpart is atob(), which does the opposite.
You could use it like the following:
function draw(imgData) {
    var b64imgData = btoa(imgData); // "binary to ASCII", which is probably what the name stands for
    var img = new Image();
    img.src = "data:image/jpeg;base64," + b64imgData;
    document.body.appendChild(img); // or append it to something else, just an example
}
If the browser you target (IE, for example) isn't Firefox or a WebKit one, you can use one of the many conversion functions lying around the internet (a good one also provides performance statistics across multiple browsers, if you're interested :)
I just started to work with canvas.
I need to reproduce an image in pure canvas:
some picture => tool => coordinates like [1, 20, 80, 45, ...] => canvas => the same picture, but rendered (created) via canvas
Are there any tools that help to extract an image's line coordinates (to map them)? Then I could just use those coordinates and get a pure canvas image.
If I understood your comment correctly, you either want to draw an image onto a canvas, or convert it to vector data and then draw that on the canvas.
Drawing an image on a canvas
This is by far the simplest solution. Converting raster images to vector data is a complicated process involving advanced algorithms, and still it's not perfect.
Rendering an image on a canvas is actually very simple:
// Get the canvas element on the page (<canvas id="canvas"> in the HTML)
var ctx = document.getElementById('canvas').getContext('2d');

// Create a new image object which will hold the image data that you want to
// render.
var img = new Image();

// Use the onload event to make the code in the function execute when the image
// has finished loading.
img.onload = function () {
    // You can use all standard canvas operations, of course. In this case, the
    // rotate function to rotate the image 45 degrees.
    ctx.rotate(Math.PI / 4);

    // Draw the image at (0, 0)
    ctx.drawImage(img, 0, 0);
};

// Tell the image object to load an image.
img.src = 'my_image.png';
Converting a raster image to vector data
This is a complicated process, so I won't give you the whole walkthrough. First of all, you can give up on trying to implement this yourself right now, because it requires a lot of work. However, there are applications and services that do this for you:
- http://vectormagic.com/home (works great, but you will have to pay for most of the functionality)
- "How to convert SVG files to other image formats" (a good list of applications that can do this for you)
After this, you can store the vector data as SVG and use the SVG rendering that some browsers have, or a library such as SVGCanvas to render SVG onto a canvas. You can probably use that library to convert the resulting image to a list of context operations instead of converting from SVG every time.
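If you only need the browsers' built-in SVG rendering (the first option above), you can skip the library entirely and draw the SVG through an Image. A minimal sketch, assuming a <canvas id="canvas"> element exists:

// Render an SVG string onto a canvas without any extra library.
var svgMarkup = '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">' +
                '<circle cx="50" cy="50" r="40" fill="teal"/></svg>';
var blob = new Blob([svgMarkup], { type: 'image/svg+xml' });
var url = URL.createObjectURL(blob);

var img = new Image();
img.onload = function() {
    var ctx = document.getElementById('canvas').getContext('2d');
    ctx.drawImage(img, 0, 0);
    URL.revokeObjectURL(url);
};
img.src = url;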