I'm trying to downscale an image using canvas, to later use the data for a hash comparison. However, I noticed that the canvas (or at least the simple code I use) applies no mipmap filter, resulting in a very sharp image and making the test against an existing hash fail (downscaling the image in GIMP with linear interpolation works as expected). The code I use to downscale is:
var canvas = document.createElement("canvas");
canvas.width = width; canvas.height = height;
var context = canvas.getContext('2d');
context.drawImage(image, 0, 0, width, height);
return context.getImageData(0, 0, width, height).data;
This produces the image shown on the left, versus the expected result on the right.
How can I get the canvas to downscale linearly?
The new canvas draft specifies a way to set the re-sampling/interpolation method for the canvas. The current method is always bi-linear, or nearest-neighbor if imageSmoothingEnabled = false (both methods apply to up-scaling as well as down-scaling). The new property is called imageSmoothingQuality:
context.imageSmoothingQuality [ = value ]
The value can be "low", "medium", or "high" (the latter meaning, for example, something like bi-cubic or Lanczos). However, no browser has implemented this at the time of writing, and the actual algorithm used for each value is not mandated.
The alternative approaches are to manually re-sample in multiple steps when you need to reduce by more than 50%, or to implement a re-sampling algorithm yourself.
Here is an example of using multiple steps to achieve bi-cubic quality (this also avoids the initial CORS problems), as well as one showing the Lanczos algorithm (which needs the CORS requirements to be met).
In addition to that, you can apply a sharpening convolution to compensate for some of the lost sharpness.
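As an illustration of the step-down technique, here is a minimal sketch (the function downScaleImage and its structure are illustrative, not taken from those examples): each halving pass lets the built-in bi-linear filter average 2x2 pixel blocks, and a final pass draws the exact target size.

// Multi-step down-scaling sketch; `image` is assumed to be a loaded
// HTMLImageElement and (width, height) the final target size.
function downScaleImage(image, width, height) {
    var cur = document.createElement("canvas");
    cur.width = image.naturalWidth;
    cur.height = image.naturalHeight;
    cur.getContext("2d").drawImage(image, 0, 0);
    // halve while the next halving still stays above the target size
    while (cur.width * 0.5 > width) {
        var next = document.createElement("canvas");
        next.width = Math.round(cur.width * 0.5);
        next.height = Math.round(cur.height * 0.5);
        next.getContext("2d").drawImage(cur, 0, 0, next.width, next.height);
        cur = next;
    }
    // final pass to the exact target size
    var out = document.createElement("canvas");
    out.width = width;
    out.height = height;
    var ctx = out.getContext("2d");
    ctx.drawImage(cur, 0, 0, width, height);
    return ctx.getImageData(0, 0, width, height).data;
}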
So there is a game online that uses WebGL2 and WebAssembly. My goal is to interact with the game via a script. It uses pointers internally, which makes it hard to read the game data directly. That's why I decided to go through the UI using the WebGL context. I'm very new to WebGL, and to graphics and rendering in general, and have no idea what I'm actually doing.
I've found the canvas and can execute methods on it. My first step is to take screenshots of areas via WebGL that I can then use to analyze parts of the UI. For that I'm using WebGLRenderingContext#readPixels. Here's a snippet that reads the whole canvas and saves its pixels as RGBA:
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("webgl2");
const pixels = new Uint8Array(ctx.drawingBufferWidth * ctx.drawingBufferHeight * 4);
ctx.readPixels(0, 0, ctx.drawingBufferWidth, ctx.drawingBufferHeight, ctx.RGBA, ctx.UNSIGNED_BYTE, pixels)
// Returns only black / white
pixels.findIndex(pixel => pixel !== 0 && pixel !== 255); // -1
So in this case there are only black pixels; every 4-tuple equals (0, 0, 0, 255). A method that draws those pixels into a temporary canvas and downloads its ImageData as a PNG produces a black image.
What's the reason behind this and how can I fix it?
For performance reasons, WebGL's drawing buffer gets cleared after it has been composited to the page: https://www.khronos.org/registry/webgl/specs/latest/1.0/#2.2
Any readPixels() call made after that point just returns empty data.
To keep its contents, you need to set the preserveDrawingBuffer flag to true when getting the drawing context via getContext().
So change this
const ctx = canvas.getContext("webgl2");
to
const ctx = canvas.getContext("webgl2", {preserveDrawingBuffer: true});
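Put together, a minimal sketch (note that context attributes are only honored on the first getContext call for a given canvas, so the flag must be set before the page's own code grabs the context):

const canvas = document.getElementById("canvas");
// Attributes only take effect on the FIRST getContext call for this canvas;
// later calls return the existing context and ignore new attributes.
const ctx = canvas.getContext("webgl2", { preserveDrawingBuffer: true });

// ... after at least one frame has been rendered ...
const pixels = new Uint8Array(ctx.drawingBufferWidth * ctx.drawingBufferHeight * 4);
ctx.readPixels(0, 0, ctx.drawingBufferWidth, ctx.drawingBufferHeight,
               ctx.RGBA, ctx.UNSIGNED_BYTE, pixels);
// pixels now holds the last presented frame instead of a cleared (black) buffer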
I'd like to dynamically downsize some images on my canvas using createjs, and then store the smaller images to be displayed when zooming out of the canvas for performance reasons. Right now, I'm using the following code:
var bitmap = new createjs.Bitmap('somefile.png');
// wait for bitmap to load (using preload.js etc.)
var oc = document.createElement('canvas');
var octx = oc.getContext('2d');
oc.width = bitmap.image.width*0.5;
oc.height = bitmap.image.height*0.5;
octx.drawImage(bitmap.image, 0, 0, oc.width, oc.height);
var dataUrl = oc.toDataURL('image/png'); // very expensive
var smallBitmap = new createjs.Bitmap(dataUrl);
This works, but:
The toDataURL operation is very expensive when converting to image/png, and too slow to use in practice (and I can't convert to the faster image/jpeg, due to the insufficient quality of the output at all the settings I tried).
Surely there must be a way to downsize the image without having to resort to separate canvas code and then manually converting the result to draw onto the createjs Bitmap object?
I've also tried:
octx.drawImage(bitmap.image, 0, 0, oc.width, oc.height);
var smallBitmap = new createjs.Bitmap(oc);
But although this is very fast, it doesn't seem to actually work (and in any case, I'm having to create a separate canvas element every time to facilitate it).
I'm wondering if there is a way that I can use drawImage to draw a downsampled version of the bitmap into a createjs Bitmap instance directly without having to go via a separate canvas object or do a conversion to string?
If I understand correctly, this is how the createjs cache property works internally (i.e. it uses drawImage to write into the DisplayObject), but I'm unable to figure out how to use it myself.
You have tagged this post with createjs and easeljs, but your examples show plain Canvas context usage for scaling.
You can use the scale parameter on Bitmap.cache() to get the result you want, then reuse the cacheCanvas as necessary.
// This will create a half-size cache (50%)
// But scale it back up for you when it displays on the stage
var bmp = new createjs.Bitmap(img);
bmp.cache(0, 0, img.width, img.height, 0.5);
// Pull out the generated cache and use it in a new Bitmap
// This will display at the new scaled size.
var bmp2 = new createjs.Bitmap(bmp.cacheCanvas);
// Un-cache the first one to reset it if you want
bmp.uncache();
Here is a fiddle to see it in action: http://jsfiddle.net/lannymcnie/ofdsyn7g/
Note that caching just uses another canvas internally, with a drawImage call to scale it down. I would definitely stay away from toDataURL, as it is not performant at all.
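For a rough idea of what that looks like internally, a simplified sketch (assuming img is the loaded image; the real cache implementation also handles filters, offsets, and so on):

// roughly equivalent to bmp.cache(0, 0, img.width, img.height, 0.5)
var cacheCanvas = document.createElement("canvas");
cacheCanvas.width = img.width * 0.5;
cacheCanvas.height = img.height * 0.5;
// a single scaled drawImage call; no toDataURL round-trip involved
cacheCanvas.getContext("2d").drawImage(img, 0, 0, cacheCanvas.width, cacheCanvas.height);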
My problem: I've got an Image object that is used to create a Pattern object. I changed the width and height properties of the image before creating the pattern, but that doesn't actually remap the image (and that in itself is not the problem). The thing is that I now have a pattern of a different size (the original image size) than the image itself. If I want to draw a line with that pattern as its fillStyle, it doesn't fit (because I need a pattern of the new size).
My question: is there an easy way to adjust the pattern's width and height?
My tries, and why I don't like them:
1) Render the original image to a canvas of the new size and create the pattern from that. I didn't use this because the pattern can't be obtained immediately; creating the canvas and rendering into it is too slow, and I want the pattern right away.
2) Calculate the ratio between the new image size and the original one, change the lineWidth of the context so the pattern's height fits exactly, and scale the line back down so it has a nice size. I didn't use this because I render in real time, and it is way too slow to be usable in web apps.
Using a canvas (your option 1) is the most flexible way.
It's not slower to draw one canvas onto another than to draw an image directly; they share the same underlying basis (you're blitting a bitmap in both cases).
(Update: drawing with a pattern as the style does go through an extra step, a local transformation matrix for the pattern, in more recent browsers.)
Create a new canvas the size of the pattern and simply draw the image into it:
patternCtx.drawImage(img, 0, 0, patternWidth, patternHeight);
Then use the canvas of patternCtx as the basis for the pattern (internally, the pattern caches this image the first time it's drawn; from there, if possible, it just doubles what it has until the whole canvas is filled).
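Put together, a minimal sketch (patternWidth, patternHeight, img, and ctx are assumed from the surrounding context):

// off-screen canvas at the desired pattern size
var patternCanvas = document.createElement("canvas");
patternCanvas.width = patternWidth;
patternCanvas.height = patternHeight;
var patternCtx = patternCanvas.getContext("2d");
patternCtx.drawImage(img, 0, 0, patternWidth, patternHeight);

// a canvas is a valid image source for createPattern
ctx.fillStyle = ctx.createPattern(patternCanvas, "repeat");
ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);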
The other option is to pre-scale the images to all the sizes you will need, load them all in, and then pick the image whose size matches.
The third is to draw the image as a pattern yourself. This is not as efficient as the built-in method, though with the doubling technique mentioned above you can get a usable result.
Example of manual patterning:
var ctx = canvas.getContext('2d');
var img = new Image();
img.onload = function() {
    fillPattern(this, 64, 64);
    change.onchange = change.oninput = function() {
        fillPattern(img, this.value, this.value);
    };
};
img.src = "//i.stack.imgur.com/tkBVh.png";

// Fills the canvas with the image as a pattern at size (w, h)
function fillPattern(img, w, h) {
    // draw once at the top-left
    ctx.drawImage(img, 0, 0, w, h);
    // then repeatedly double the covered area until the canvas is filled
    while (w < canvas.width) {
        ctx.drawImage(canvas, w, 0);
        w <<= 1; // shift left 1 = *2, but slightly faster
    }
    while (h < canvas.height) {
        ctx.drawImage(canvas, 0, h);
        h <<= 1;
    }
}
<input id=change type=range min=8 max=120 value=64><br>
<canvas id=canvas width=500 height=400></canvas>
(or with a video as pattern).
I am having difficulty rendering several patterns (each with a different texture) in the 2D context of an HTML5 canvas.
Assume I have three separate canvases: two off-screen canvases containing different textures, and one canvas for rendering. Let the off-screen canvases be A and B.
Then:
var patternA = ctx.createPattern(A, "repeat-x");
ctx.fillStyle = patternA;
ctx.fillRect(100,100,20,20);
var patternB = ctx.createPattern(B, "repeat-y");
ctx.fillStyle = patternB;
ctx.fillRect(150,100,20,20);
There should be two 20x20 rectangles, each with its own pattern; however, the second rectangle doesn't render at all. I've tried everything to get them working, but to no avail.
Why is that? How should I render multiple tiling textures onto the same canvas?
Which browsers are you trying? With Firefox and Chrome, I couldn't get either pattern to render with repeat-x or repeat-y. Instead, I was able to get both to render with just repeat. (See http://jsfiddle.net/ZthsS/1/)
It is possible that browsers have an incomplete implementation of the specification. According to the implementation status at http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-createpattern, the IE beta and the Firefox nightly pass all test cases, but other browsers don't. I would recommend just using repeat for the time being. You can emulate repeat-x and repeat-y by simply limiting the fillRect to the height or width of the pattern:
var patternA = ctx.createPattern(A, "repeat");
ctx.fillStyle = patternA;
ctx.fillRect(100,100,20,Math.min(20, A.height));
var patternB = ctx.createPattern(B, "repeat");
ctx.fillStyle = patternB;
ctx.fillRect(150,100,Math.min(20, B.width), 20);
I'm currently trying to create a page with dynamically generated images (not shapes) drawn into a canvas to create an animation.
The first thing I tried was the following:
// create plenty of these:
var imageArray = ctx.createImageData(16, 8); // createImageData takes (width, height)
// fill them with RGBA values...
// then draw them:
ctx.putImageData(imageArray, x, y);
The problem is that the images overlap, and that putImageData simply... puts the data into the context, with no respect for the alpha channel, as specified by the W3C:
pixels in the canvas are replaced wholesale, with no composition, alpha blending, no shadows, etc.
So I thought: how can I use Images rather than ImageData?
I tried to find a way to put the ImageData object back into an image, but it appears it can only be put into a canvas context. So, as a last resort, I tried using the toDataURL() method of a 16x8 canvas (the size of my images) and setting the result as the src of my ~600 images.
The result was beautiful, but it was eating up 100% of my CPU (whereas putImageData used about 5%). My guess is that, for some unknown reason, the image is re-loaded from the image/png data URI each time it is drawn... but that would be plain weird, no? It also seems to use a lot more RAM than my previous technique.
So, as a result, I have no idea how to achieve my goal.
How can I dynamically create alpha-channelled images in JavaScript and then draw them onto a canvas at an acceptable speed?
Is the only real alternative using a Java applet?
Thanks for your time.
Not knowing what you really want to accomplish:
Did you have a look at the drawImage method of the rendering context?
Basically, it does the composition (as specified by the globalCompositeOperation property) for you, and it allows you to pass in a canvas element as the source.
So you could probably do something along the lines of:
var offScreenContext = document.getCSSCanvasContext("2d", "synthImage", width, height);
var pixelBuffer = offScreenContext.createImageData(tileWidth, tileHeight);
// do your image synthesis and put the updated buffer back into the context:
offScreenContext.putImageData(pixelBuffer, 0, 0, tileOriginX, tileOriginY, tileWidth, tileHeight);
// assuming 'ctx' is the context of the canvas that actually gets drawn on screen
ctx.drawImage(
    offScreenContext.canvas,                         // => the synthesized image
    tileOriginX, tileOriginY, tileWidth, tileHeight, // => frame of offScreenContext that gets drawn
    originX, originY, tileWidth, tileHeight          // => frame of ctx to draw in
);
Assuming you have an animation you want to loop, this has the added benefit of only having to generate the frames once into a kind of sprite map, so that in subsequent iterations you only ever need to call ctx.drawImage() -- at the expense of an increased memory footprint, of course...
Why don't you use SVG?
If you have to use canvas, maybe you could implement drawing an image on a canvas yourself?
var red = oldred*(1-alpha)+imagered*alpha
...and so on...
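Spelled out as a sketch over two equally sized ImageData buffers (blendOnto is a hypothetical helper; the destination is assumed opaque, and this manual "source-over" blend is far slower than letting drawImage composite for you):

function blendOnto(dst, src) { // dst, src: ImageData of equal size
    var d = dst.data, s = src.data;
    for (var i = 0; i < d.length; i += 4) {
        var a = s[i + 3] / 255;                       // source alpha, 0..1
        d[i]     = d[i]     * (1 - a) + s[i]     * a; // red
        d[i + 1] = d[i + 1] * (1 - a) + s[i + 1] * a; // green
        d[i + 2] = d[i + 2] * (1 - a) + s[i + 2] * a; // blue
        // destination is assumed opaque, so its alpha stays 255
    }
    return dst;
}
// usage: read the target region, blend, write it back
// var region = ctx.getImageData(x, y, 16, 8);
// ctx.putImageData(blendOnto(region, imageArray), x, y);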
getCSSCanvasContext seems to be WebKit-only, but you could also create an offscreen canvas like this:
var canvas = document.createElement('canvas');
canvas.setAttribute('width', 300); // use whatever you like for width and height
canvas.setAttribute('height', 200);
You can then draw into it, and draw it onto another canvas with the drawImage method.
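For example (visibleCtx, x, and y are placeholders for your on-screen context and draw position):

var offCtx = canvas.getContext('2d');
// ... synthesize the image into offCtx ...
visibleCtx.drawImage(canvas, x, y); // composites with alpha, unlike putImageData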