I want to make a responsive canvas together with its context.
I have already made the canvas responsive; below you can see my resize function.
$(window).on('resize', function(event) {
    event.preventDefault();
    d.myScreenSize = {
        x: $('.mainCanvasWrapper').width(),
        y: $.ratio($('.mainCanvasWrapper').width())
    };
    console.log('d.myScreenSize ', d.myScreenSize);
    var url = canvas.toDataURL();
    var image = new Image();
    ctx.canvas.width = d.myScreenSize.x;
    ctx.canvas.height = d.myScreenSize.y;
    image.onload = function() {
        ctx.drawImage(image, 0, 0, $('#paper').width(), $('#paper').height());
        // the drawing ends up blurry; please have a look at the screenshot I posted
    };
    image.src = url;
    if (d.drawer == 'true') {
        socket.emit('windowResize', d.myScreenSize);
    }
});
I have tried many solutions, and I don't want to use any library.
Can anyone suggest a better solution?
You are scaling bitmap data which means you will have loss in quality when resizing it. If you went from a small size to a large size then the image will be blurry due to the interpolation that takes place.
What you would want to do is to store your drawings as vectors:
- Keep the drawn points in arrays "internally"
- When needed, redraw all the points to the canvas
- When resized, scale the points accordingly, then redraw as above
My suggestion when it comes to rescaling the points is to keep and use the original points as a basis every time as this will give you a more accurate scaling.
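A minimal sketch of that approach, assuming a simple mouse-drawn line tool with canvas and ctx as in the question. Points are stored normalized to the canvas size, so they can be rescaled losslessly on every resize:

var strokes = [];   // all finished strokes, each an array of points in [0, 1]
var current = null; // the stroke being drawn right now

canvas.addEventListener('mousedown', function () {
    current = [];
    strokes.push(current);
});

canvas.addEventListener('mousemove', function (e) {
    if (!current) return;
    var rect = canvas.getBoundingClientRect();
    // store positions relative to the canvas size, not in pixels
    current.push({
        x: (e.clientX - rect.left) / canvas.width,
        y: (e.clientY - rect.top) / canvas.height
    });
    redraw();
});

canvas.addEventListener('mouseup', function () {
    current = null;
});

function redraw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    strokes.forEach(function (stroke) {
        ctx.beginPath();
        stroke.forEach(function (p, i) {
            // scale the normalized points back up to the current size
            var x = p.x * canvas.width;
            var y = p.y * canvas.height;
            if (i === 0) { ctx.moveTo(x, y); } else { ctx.lineTo(x, y); }
        });
        ctx.stroke();
    });
}

// on resize: set the new canvas size, then redraw from the vector
// data instead of rescaling the old bitmap
$(window).on('resize', function () {
    ctx.canvas.width = $('.mainCanvasWrapper').width();
    ctx.canvas.height = $.ratio($('.mainCanvasWrapper').width());
    redraw();
});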
There are plenty of examples here on SO on how to store drawn points as well as redraw them. One that could be used for basis is for example: HTML canvas art, generate coordinates data from sketch.
Related
So there is a game online that uses WebGL2 and WebAssembly. My goal is to interact with the game via a script. The game uses pointers internally, which makes it hard to read anything useful out of its data. That's why I decided to go over the UI using the WebGL context. I'm very new to WebGL and to graphics and rendering in general, and have no idea what I'm actually doing.
I've found the canvas and can execute methods on it. My first step is to take screenshots of areas via WebGL, which I can then analyze to read parts of the UI. For that I'm using WebGLRenderingContext#readPixels. Here's a snippet that reads the whole canvas and saves its pixels as RGBA:
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("webgl2");
const pixels = new Uint8Array(ctx.drawingBufferWidth * ctx.drawingBufferHeight * 4);
ctx.readPixels(0, 0, ctx.drawingBufferWidth, ctx.drawingBufferHeight, ctx.RGBA, ctx.UNSIGNED_BYTE, pixels);
// Returns only black / white
pixels.findIndex(pixel => pixel !== 0 && pixel !== 255); // -1
So in this case there are only black pixels; every 4-tuple equals (0, 0, 0, 255). A method that draws those pixels into a temporary canvas and downloads its ImageData as a PNG produces an all-black image.
What's the reason behind this and how can I fix it?
For performance reasons, WebGL's drawing buffer gets cleared after it has been composited: https://www.khronos.org/registry/webgl/specs/latest/1.0/#2.2
Any later call to readPixels() will just return empty data.
To keep its content you need to set the preserveDrawingBuffer flag to true when getting the drawing context via the getContext() function.
So change this
const ctx = canvas.getContext("webgl2");
to
const ctx = canvas.getContext("webgl2", {preserveDrawingBuffer: true});
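One caveat, going beyond the original answer: context attributes are only honored on the very first getContext() call for a given canvas; later calls simply return the already-created context. Since the game creates its context before your script runs, one workaround is to patch getContext before the game initializes. A sketch:

// Run this before the game creates its canvas context.
const origGetContext = HTMLCanvasElement.prototype.getContext;
HTMLCanvasElement.prototype.getContext = function (type, attrs) {
    if (type === "webgl2") {
        // force the flag on, keeping whatever attributes the game asked for
        attrs = Object.assign({}, attrs, { preserveDrawingBuffer: true });
    }
    return origGetContext.call(this, type, attrs);
};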
I'm working with the latest version of three.js, and I have a scene with a 2D mesh with a gradient color. The color depends on a value assigned to each vertex. I would like to get the value at any point of my gradient by clicking on it with the mouse (by reading the color and doing some math to recover my value).
I tried to use a system like this one, since it works here : http://jsbin.com/wepuca/2/edit?html,js,output
var canvas = document.getElementById("myCanvas");
var ctx = canvas.getContext("2d");
function echoColor(e){
    // note: the second argument should be the y coordinate
    var imgData = ctx.getImageData(e.pageX, e.pageY, 1, 1);
    var red = imgData.data[0];
    var green = imgData.data[1];
    var blue = imgData.data[2];
    var alpha = imgData.data[3];
    console.log(red + " " + green + " " + blue + " " + alpha);
}
My problem is that this technique is meant to read pixels from a canvas that displays an image, not a mesh. When I tried to use it on my mesh, I got one of these errors:
"TypeError: ctx is null" or "imageData is not a function" (depending on how I declared my canvas and its context).
Since I didn't find out whether this method can be used with a mesh, I'm asking here whether it's possible at all, rather than how to do it.
Thanks in advance
First, you cannot get a 2D context from a canvas which is already using a WebGL context. This is why you're getting the ctx is null error.
To get the pixel color, you have two options.
1. Transfer the WebGL canvas image onto a canvas with a 2D context.
To do this, create a new canvas and size it to be the same size as your THREE.js canvas, and get a 2D context from it. You can then draw the rendered image from the THREE.js canvas onto your new canvas as an image, using the 2D context:
var img2D = new Image();
img2D.addEventListener("load", function () {
    ctx2D.clearRect(0, 0, canvas2D.width, canvas2D.height);
    ctx2D.drawImage(img2D, 0, 0);
    // from here, get your pixel data
});
img2D.src = renderer.domElement.toDataURL("image/png");
From there you can use getImageData, like you did before.
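For example, assuming mouseX and mouseY are already relative to the canvas:

var data = ctx2D.getImageData(mouseX, mouseY, 1, 1).data;
console.log(data[0], data[1], data[2], data[3]); // red, green, blue, alpha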
One caveat of this method is that you may see image degradation due to compression introduced when exporting the image via toDataURL. You can circumvent this in some cases:
if (ctx2D.imageSmoothingEnabled !== undefined) {
    ctx2D.imageSmoothingEnabled = false;
} else if (ctx2D.mozImageSmoothingEnabled !== undefined) {
    ctx2D.mozImageSmoothingEnabled = false;
} else if (ctx2D.webkitImageSmoothingEnabled !== undefined) {
    ctx2D.webkitImageSmoothingEnabled = false;
} else {
    console.warn("Browser does not support disabling canvas antialiasing. Resulting image may be blurry.");
}
2. You can also get a pixel value directly from the renderer, but it will cost you an extra render pass.
See this THREE.js example: https://threejs.org/examples/?q=instanc#webgl_interactive_instances_gpu
This example uses readRenderTargetPixels in the THREE.WebGLRenderer. https://threejs.org/docs/#api/renderers/WebGLRenderer
I don't have any personal experience using this method, so hopefully someone else can fill in some of the blanks.
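That said, here is a rough, untested sketch of option 2 based on the linked docs. mouseX and mouseY are assumed to be canvas-relative coordinates, and exact usage differs between THREE.js versions:

var target = new THREE.WebGLRenderTarget(
    renderer.domElement.width,
    renderer.domElement.height
);
// render the scene into the off-screen target (the extra render pass)
renderer.render(scene, camera, target);
// read one RGBA pixel back; note the y flip, WebGL's origin is bottom-left
var pixel = new Uint8Array(4);
renderer.readRenderTargetPixels(target, mouseX, target.height - mouseY, 1, 1, pixel);
console.log(pixel[0], pixel[1], pixel[2], pixel[3]); // red, green, blue, alpha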
My problem: I've got an Image object that is being used to create a pattern. The issue is that I changed the width and height properties of the image before creating the pattern, but that doesn't actually resample the image (and that in itself is no problem). The result is that I now have a pattern of a different size (the original image size) than the image itself. If I want to draw a line with that pattern as its fillStyle, it doesn't fit (because I need a pattern of the new size).
My question: is there an easy way to adjust the pattern's width and height?
My tries, and why I don't like them:
1) Render the original image to a canvas of the new size and create the pattern from that. I didn't use this because the pattern cannot be obtained immediately; creating the canvas and rendering into it is too slow, and I want the pattern right away.
2) Calculate the scale factor between the new image size and the original one, change the lineWidth of the context so the pattern's height fits exactly, and scale the line back down so it has a nice size. I didn't use this because I render in real time, and this is way too slow for use in web apps.
Using a canvas (your option 1) is the most flexible way.
It's not slower to use a canvas to draw on another canvas than to use an image directly. They share the same basis (you're blitting a bitmap, just as you do with an image).
(Update: drawing with a pattern as style does go through an extra step of a local transformation matrix for the pattern in more recent browsers.)
Create a new canvas in the size of the pattern and simply draw the image into it:
patternCtx.drawImage(img, 0, 0, patternWidth, patternHeight);
Then use the canvas of patternCtx as basis for the pattern (internally the pattern caches this image the first time it's drawn, from there, if possible, it just doubles out what it has until the whole canvas is filled).
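Put together, a small sketch (img, patternWidth, patternHeight and the target ctx are assumed to already exist):

var patternCanvas = document.createElement('canvas');
patternCanvas.width = patternWidth;
patternCanvas.height = patternHeight;
var patternCtx = patternCanvas.getContext('2d');
// resample the image once, at the size the pattern should have
patternCtx.drawImage(img, 0, 0, patternWidth, patternHeight);
// a canvas element is a valid image source for createPattern()
ctx.fillStyle = ctx.createPattern(patternCanvas, 'repeat');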
The other option is to pre-scale the images to all the sizes you need, load them all in, and then pick the image whose size matches the one you need.
The third is to draw the image as a pattern yourself, as in the example below. This is not as efficient as the built-in method, though with the doubling technique shown there you can get a usable result.
Example of manual patterning:
var ctx = canvas.getContext('2d');
var img = new Image();

img.onload = function() {
    fillPattern(this, 64, 64);
    change.onchange = change.oninput = function() {
        // the input's value is a string, so coerce it to a number
        fillPattern(img, +this.value, +this.value);
    };
};
img.src = "//i.stack.imgur.com/tkBVh.png";

// Fills canvas with image as pattern at size w,h
function fillPattern(img, w, h) {
    // draw once
    ctx.drawImage(img, 0, 0, w, h);
    // then repeatedly draw the canvas onto itself, doubling the
    // covered area each pass until the whole canvas is filled
    while (w < canvas.width) {
        ctx.drawImage(canvas, w, 0);
        w <<= 1; // shift left 1 = *2 but slightly faster
    }
    while (h < canvas.height) {
        ctx.drawImage(canvas, 0, h);
        h <<= 1;
    }
}
<input id=change type=range min=8 max=120 value=64><br>
<canvas id=canvas width=500 height=400></canvas>
(or with a video as pattern).
I'm currently trying to create a page with dynamically generated images (which are not shapes) drawn into a canvas to create an animation.
The first thing I tried was the following:
// create plenty of those:
var imageArray = ctx.createImageData(16, 8);
// fill them with RGBA values...
// then draw them:
ctx.putImageData(imageArray, x, y);
The problem is that the images are overlapping and that putImageData simply... puts the data in the context, with no respect to the alpha channel as specified in the w3c:
pixels in the canvas are replaced wholesale, with no composition, alpha blending, no shadows, etc.
So I thought, well, how can I use Images and not ImageData objects?
I tried to find a way to put the ImageData object back into an image, but it appears it can only be put into a canvas context. So, as a last resort, I tried to use the toDataURL() method of a 16x8 canvas (the size of my images) and to set the result as the src of my ~600 images.
The result was beautiful, but it was eating up 100% of my CPU (whereas putImageData used about 5%). My guess is that, for some unknown reason, the image is re-decoded from the image/png data URI each time it is drawn... but that would be plain weird, no? It also seems to take a lot more RAM than my previous technique.
So, as a result, I have no idea how to achieve my goal.
How can I dynamically create alpha-channelled images in javascript and then draw them at an appreciable speed on a canvas?
Is the only real alternative using a Java applet?
Thanks for your time.
Not knowing what you really want to accomplish:
Did you have a look at the drawImage method of the rendering context?
Basically, it does the composition (as specified by the globalCompositeOperation property) for you, and it allows you to pass in a canvas element as the source.
So you could probably do something along the lines of:
var offScreenContext = document.getCSSCanvasContext("2d", "synthImage", width, height);
var pixelBuffer = offScreenContext.createImageData(tileWidth, tileHeight);

// do your image synthesis and put the updated buffer back into the context:
offScreenContext.putImageData(pixelBuffer, 0, 0, tileOriginX, tileOriginY, tileWidth, tileHeight);

// assuming 'ctx' is the context of the canvas that actually gets drawn on screen
ctx.drawImage(
    offScreenContext.canvas,                         // => the synthesized image
    tileOriginX, tileOriginY, tileWidth, tileHeight, // => frame of offScreenContext that gets drawn
    originX, originY, tileWidth, tileHeight          // => frame of ctx to draw in
);
Assuming that you have an animation you want to loop over, this has the added benefit of only having to generate the frames once into some kind of sprite-map so that in subsequent iterations you'll only ever need to call ctx.drawImage() -- at the expense of an increased memory footprint of course...
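A sketch of that sprite-map idea; frameW, frameH, frameCount, synthesizeFrame, originX and originY are hypothetical placeholders for your own sizes and synthesis code:

var sheet = document.createElement('canvas');
sheet.width = frameW * frameCount;
sheet.height = frameH;
var sheetCtx = sheet.getContext('2d');

// generate every frame exactly once, side by side
for (var i = 0; i < frameCount; i++) {
    sheetCtx.putImageData(synthesizeFrame(i), i * frameW, 0);
}

// animation loop: only cheap drawImage calls from here on,
// with proper alpha compositing
var frame = 0;
(function tick() {
    ctx.clearRect(originX, originY, frameW, frameH);
    ctx.drawImage(sheet,
        frame * frameW, 0, frameW, frameH, // source rect in the sprite-map
        originX, originY, frameW, frameH); // destination rect on screen
    frame = (frame + 1) % frameCount;
    requestAnimationFrame(tick);
})();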
Why don't you use SVG?
If you have to use canvas, maybe you could implement drawing an image on a canvas yourself?
var red = oldred*(1-alpha)+imagered*alpha
...and so on...
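Spelled out a bit more, a rough sketch of that per-pixel "source-over" blend, assuming dst and src are ImageData objects of the same size:

function blend(dst, src) {
    var d = dst.data, s = src.data;
    for (var i = 0; i < d.length; i += 4) {
        var alpha = s[i + 3] / 255; // source alpha in [0, 1]
        d[i]     = d[i]     * (1 - alpha) + s[i]     * alpha; // red
        d[i + 1] = d[i + 1] * (1 - alpha) + s[i + 1] * alpha; // green
        d[i + 2] = d[i + 2] * (1 - alpha) + s[i + 2] * alpha; // blue
        // crude alpha accumulation; real source-over is a bit more involved
        d[i + 3] = Math.min(255, d[i + 3] + s[i + 3]);
    }
}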
getCSSCanvasContext seems to be WebKit-only, but you could also create an offscreen canvas like this:
var canvas = document.createElement('canvas');
canvas.setAttribute('width', 300); // use whatever you like for width and height
canvas.setAttribute('height', 200);
Which you can then draw to and draw onto another canvas with the drawImage method.
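For example (onScreenCtx, pixelBuffer, x and y are assumed to exist):

var offCtx = canvas.getContext('2d');
// synthesize a frame off-screen...
offCtx.putImageData(pixelBuffer, 0, 0);
// ...then composite it onto the visible canvas with alpha blending
onScreenCtx.drawImage(canvas, x, y);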
I just started to work with canvas.
I need to reproduce an image in pure canvas:
image => tool => [1, 20, 80, 45, ...] => canvas => canvas renders the image
That is: take some picture, extract its coordinates with a tool, and have the same picture rendered (created) via canvas.
Are there any tools that help to get an image's line coordinates (to map them)?
Then I could just use those coordinates and get a pure canvas image.
If I understood your comment correctly, you either want to draw an image onto a canvas, or convert it to vector data and then draw that on the canvas.
Drawing an image on a canvas
This is by far the simplest solution. Converting raster images to vector data is a complicated process involving advanced algorithms, and still it's not perfect.
Rendering an image on a canvas is actually very simple:
// Get the canvas element on the page (<canvas id="canvas"> in the HTML)
var ctx = document.getElementById('canvas').getContext('2d');

// Create a new image object which will hold the image data that you want to
// render.
var img = new Image();

// Use the onload event to make the code in the function execute when the image
// has finished loading.
img.onload = function () {
    // You can use all standard canvas operations, of course. In this case, the
    // rotate function to rotate the image 45 degrees.
    ctx.rotate(Math.PI / 4);
    // Draw the image at (0, 0)
    ctx.drawImage(img, 0, 0);
};

// Tell the image object to load an image.
img.src = 'my_image.png';
Converting a raster image to vector data
This is a complicated process, so I won't give you the whole walkthrough. First of all, you can give up on trying to implement this yourself right now, because it requires a lot of work. However, there are applications and services that do this for you:
- http://vectormagic.com/home: works great, but you will have to pay for most of the functionality
- How to convert SVG files to other image formats: a good list of applications that can do this for you
After this, you can store the vector data as SVG and use the SVG rendering that some browsers have, or a library such as SVGCanvas to render SVG onto a canvas. You can probably use that library to convert the resulting image to a list of context operations instead of converting from SVG every time.