Example: https://jsfiddle.net/jm1y9c0L/1/
Code:
const context = document.getElementById("canvas").getContext("2d");
const scale = document.getElementById("scale");
const translate = document.getElementById("translate");
scale.value = 17.78598548284369;
translate.value = 10.02842190048295;
function draw() {
  context.fillStyle = "#f00";
  context.fillRect(0, 0, 1, 1);
  context.fillRect(1, 0, 1, 1);
}
function update() {
  const s = Number(scale.value);
  const t = Number(translate.value);
  context.clearRect(0, 0, 100, 100);
  context.save();
  context.translate(t, t);
  context.scale(s, s);
  draw();
  context.restore();
}
update();
Question: How do I draw tiles in a scaled canvas context without blurry gaps?
A few things to note:
If I don't change the background to black, things look ok:
https://jsfiddle.net/jm1y9c0L/2/
I think that if I drew all the tiles on a buffer and then drew the buffer onto the canvas scaled, it would work. But that has issues of its own, one being that it can use a lot of memory if I'm scaling a very large buffer down.
Edit:
To explain my goal better: I want to draw two rectangles next to each other where the boundary between them lands on a fractional pixel coordinate. So if I draw two rectangles, one green and one red, on a black background, I want the boundary pixel to be half green and half red, but not black at all.
It is because you are using floats and not integers; the canvas can't draw 0.78598548284369 of a pixel. I would recommend putting Math.floor() around your scale value when you call scale():
context.scale(Math.floor(s), Math.floor(s));
Hope this helps :D
You can't do subpixel rendering in a browser (at least you can't expect consistent results). In your case, the end of one rectangle isn't necessarily the start of the other, since there may or may not be a pixel-wide gap which comes from rounding.
You might even notice some weird behaviour, such as getting different results in different screen positions, different browsers, when resizing, etc.
The solution is to calculate the value of each pixel yourself, not let the browser do it for you.
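A minimal sketch of that idea (an illustrative helper, not the poster's code): compute how much of each one-pixel column is covered by the red tile and by the green tile, and blend the colours yourself instead of relying on the browser's anti-aliasing.
// Overlap of the 1px column [x, x+1) with the tile interval [start, end).
function coverage(x, start, end) {
    return Math.max(0, Math.min(x + 1, end) - Math.max(x, start));
}
// Draws two adjacent 1-unit tiles scaled by s and translated by t; the colour of
// the boundary column is computed by hand. Only the x boundary is handled here,
// the vertical edges are simply rounded for brevity.
function drawTwoTiles(ctx, s, t) {
    var redStart = t, redEnd = t + s;
    var greenStart = t + s, greenEnd = t + 2 * s;
    for (var x = Math.floor(redStart); x < Math.ceil(greenEnd); x++) {
        var r = coverage(x, redStart, redEnd);
        var g = coverage(x, greenStart, greenEnd);
        ctx.fillStyle = "rgb(" + Math.round(r * 255) + "," + Math.round(g * 255) + ",0)";
        ctx.fillRect(x, Math.round(t), 1, Math.round(s));
    }
}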
I have an image that I am mapping to a PlaneGeometry using TextureLoader#load(). It renders nicely as long as my camera is not looking "along" the plane, in which case it (of course) has zero width and disappears from view.
I'm trying to figure out how to give the image some depth, so my current ideas are:
Generate a ParticleSystem where the particle at (x, y) shares its color with the (greyscale) pixel in the image at (x, y) — effectively, making each pixel a camera-facing billboard
"Cube-ifying" each pixel in the image so it takes up some space in the z direction. So a pixel at (x, y) becomes a cube at (x, y, z) with some size in all three dimensions.
Both allow me to look 'along' the edge of the image and see something, whereas the plane is invisible.
My first question is, is this possible, and second, what is the name for this voxelization technique? Is it natively supported in three.js?
Very happy to provide more info or some basic code if that's at all helpful.
It is definitely possible.
The idea with the cubes should be doable in every decent 3d-engine.
It's just a matter of cloning / instancing cubes, and then positioning and coloring them accordingly.
Not sure if there is a name for this technique.
Also not sure about particle systems in three.js, but I suppose that would be an option too.
My real answer is this: Use Heightmaps
They do pretty much exactly what you want. A heightmap is a greyscale texture, and for each vertex of the geometry it is mapped onto, it offsets the y-component of the vertex's position according to the grey value (white = no offset, black = full offset).
http://danni-three.blogspot.de/2013/09/threejs-heightmaps.html
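A rough sketch of the heightmap idea (this assumes an older three.js where PlaneGeometry exposes a .vertices array; newer versions use BufferGeometry position attributes instead):
function applyHeightmap(geometry, image, maxOffset) {
    // Read the greyscale pixels through an offscreen canvas.
    var canvas = document.createElement('canvas');
    canvas.width = image.width;
    canvas.height = image.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0);
    var data = ctx.getImageData(0, 0, image.width, image.height).data;
    // Assumes one vertex per pixel, i.e. the plane was created as
    // new THREE.PlaneGeometry(w, h, image.width - 1, image.height - 1).
    for (var i = 0; i < geometry.vertices.length; i++) {
        var grey = data[i * 4]; // red channel of a greyscale image
        // Offset along the plane's normal (local z; world y once the plane lies flat).
        geometry.vertices[i].z = (grey / 255) * maxOffset;
    }
    geometry.verticesNeedUpdate = true;
}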
I have a script which takes a picture and creates black-and-white R, G and B channels, and I would like to count how many of those channels have anything on them (black = no colour, white = colour data).
It works by going through every single pixel and saving data in 3 different canvas elements. Each one showing different channel.
It works great, but when I give it a picture that contains only red and blue, the green channel (obviously) is completely black.
So my question is, how do I check whether the canvas is completely black? I would like to avoid looping through every pixel in every channel to see if it's all black.
cheers
OK, I've tested a few things, and the easiest solution I could come up with was creating a 1x1 offscreen canvas, drawing your image (or the original canvas you want to check for data) onto it scaled down to a single pixel, and checking that pixel (I also added basic caching).
It's not a precise check, and the result depends on how the pixel data is distributed in the original image, so as GameAlchemist noted, you may loop through all the pixels anyway if the result of the check is 0 if you want accurate results.
function quickCheck(img){
    // Cache the 1x1 canvas and its context on the function itself
    // (avoids arguments.callee, which is disallowed in strict mode).
    if (!quickCheck.canvas) { quickCheck.canvas = document.createElement("canvas"); }
    if (!quickCheck.ctx) { quickCheck.ctx = quickCheck.canvas.getContext("2d"); }
    quickCheck.canvas.width = quickCheck.canvas.height = 1;
    // Draw the source scaled down to a single pixel and read that pixel back.
    quickCheck.ctx.drawImage(img, 0, 0, 1, 1);
    var result = quickCheck.ctx.getImageData(0, 0, 1, 1);
    return result.data[0] + result.data[1] + result.data[2] + result.data[3];
}
You must iterate the canvas's pixel data; there is no magic method to avoid that.
The performance cost of iterating through 2 images (your front image and your back image) is trivial.
Another quick check would be to calculate the sum of the image data. If every pixel is opaque black (RGBA 0,0,0,255), the sum equals the number of pixels in the canvas times 255, since only the alpha channel contributes. There is a small chance that some other combination of pixel values adds up to the same number, but that chance is very low (unless you are churning through a huge number of canvases).
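A minimal sketch of that check (assuming an opaque canvas, so each all-black pixel contributes exactly 255 via its alpha channel):
function isProbablyAllBlack(canvas) {
    var ctx = canvas.getContext("2d");
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var sum = 0;
    for (var i = 0; i < data.length; i++) {
        sum += data[i];
    }
    // Each opaque black pixel contributes exactly 255 (its alpha channel).
    return sum === canvas.width * canvas.height * 255;
}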
I am using the Three.JS library to display a point cloud in a web brower. The point cloud is generated once at start up and no further points are added or removed. But it does need to be rotated, panned and zoomed. I've gone through the tutorial about creating particles in three.js here
Using the example I can create particles that are squares or use an image of a sphere to create a texture. The image is closer to what I want, but is it possible to generate the point clouds without using the image? The sphere geometry for example.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
This obscuring of the images is the reason I would like to generate the points using shapes. I have tried replacing particles = new THREE.Geometry() with THREE.SphereGeometry(radius, segments, rings) and tried to change the vertices to spheres.
So my question is: how do I modify the example code so that it renders spheres (or points) instead of squares? Also, is a particle system the most efficient approach for my particular case, or should I just generate the particles and set their individual positions? As I mentioned, I only generate the points once, but then rotate, zoom and pan them. (I used the TrackBall sample code to get the mouse events working.)
Thanks for your help
I don't think rendering a point cloud with spheres is very efficient. You should be able to get away with a particle system and use a texture or a small canvas program to draw a circle.
One of the first three.js samples uses a canvas program; here are the important bits:
var PI2 = Math.PI * 2;
var program = function ( context ) {
    context.beginPath();
    context.arc( 0, 0, 1, 0, PI2, true );
    context.closePath();
    context.fill();
};
var particle = new THREE.Particle( new THREE.ParticleCanvasMaterial( {
    color: Math.random() * 0x808008 + 0x808080,
    program: program
} ) );
Feel free to adapt the code for the WebGL renderer.
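One possible adaptation for the WebGL renderer (a sketch, not code from the example) is to draw the same circle onto a small canvas and use it as a point-sprite texture:
var spriteCanvas = document.createElement('canvas');
spriteCanvas.width = spriteCanvas.height = 16;
var sctx = spriteCanvas.getContext('2d');
sctx.fillStyle = '#ffffff';
sctx.beginPath();
sctx.arc(8, 8, 8, 0, Math.PI * 2, true);
sctx.closePath();
sctx.fill();
var spriteTexture = new THREE.Texture(spriteCanvas);
spriteTexture.needsUpdate = true;
// spriteTexture can then be used as the 'map' of the particle/point material.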
Another clever solution I've seen in the examples is using an encoded WebM video to store the data and passing it to a GLSL shader, which is rendered through a particle system in three.js.
If your point cloud comes from a Kinect, these resources might be useful:
DepthCam
KinectJS
When comparing my code to http://threejs.org/examples/#webgl_custom_attributes_particles3
I saw the only difference was:
vec4 outColor = texture2D( texture, gl_PointCoord );
if ( outColor.a < 0.5 ) discard;
gl_FragColor = outColor;
Adding that to the fragment shader fixed the problem for me.
It wasn't z-fighting, because the corners that overlapped distant particles did so at random.
material.alphaTest = 0.5 didn't work and turning off depth writes/tests messed up the viewing order.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
You can get rid of the transparency-overlap problem of the underlying square structure by setting
depthTest: false
The problem then is that, if you add additional objects to the scene, the depth testing will fail and the PointCloud will be rendered in front of the other objects, ignoring the actual order. To get around that you can additionally set
depthWrite: false
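A minimal sketch of the material setup, assuming a three.js version that provides THREE.PointsMaterial (older releases call it PointCloudMaterial or ParticleBasicMaterial) and some circular sprite texture:
var material = new THREE.PointsMaterial({
    size: 2,
    map: spriteTexture,   // hypothetical circular sprite texture
    transparent: true,
    depthTest: false,     // avoids the square edges obscuring neighbouring points
    depthWrite: false     // see the note above about ordering against other objects
});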
I'm currently trying to create a page with dynamically generated images, which are not shapes, drawn into a canvas to create an animation.
The first thing I tried was the following:
//create plenty of those:
var imageArray = ctx.createImageData(16,8);
//fill them with RGBA values...
//then draw them
ctx.putImageData(imageArray,x,y);
The problem is that the images are overlapping and that putImageData simply... puts the data in the context, with no respect to the alpha channel as specified in the w3c:
pixels in the canvas are replaced wholesale, with no composition, alpha blending, no shadows, etc.
So I thought, well how can I use Images and not ImageDatas?
I tried to find a way to put the ImageData object back into an image, but it appears it can only be put into a canvas context. So, as a last resort, I tried to use the toDataURL() method of a 16x8 canvas (the size of my images) and to stick the result into the src of my ~600 images.
The result was beautiful, but it was eating up 100% of my CPU (which putImageData did not, ~5% CPU). My guess is that for some unknown reason the image is re-loaded from the image/png data URI each time it is drawn... but that would be plain weird, no? It also seems to take a lot more RAM than my previous technique.
So, as a result, I have no idea how to achieve my goal.
How can I dynamically create alpha-channelled images in javascript and then draw them at an appreciable speed on a canvas?
Is the only real alternative using a Java applet?
Thanks for your time.
Not knowing what you really want to accomplish:
Did you have a look at the drawImage method of the rendering context?
Basically, it does the composition (as specified by the globalCompositeOperation property) for you, and it allows you to pass in a canvas element as the source.
So you could probably do something along the lines of:
var offScreenContext = document.getCSSCanvasContext( "2d", "synthImage", width, height);
var pixelBuffer = offScreenContext.createImageData( tileWidth, tileHeight );
// do your image synthesis and put the updated buffer back into the context:
offScreenContext.putImageData( pixelBuffer, 0, 0, tileOriginX, tileOriginY, tileWidth, tileHeight );
// assuming 'ctx' is the context of the canvas that actually gets drawn on screen
ctx.drawImage(
offScreenContext.canvas, // => the synthesized image
tileOriginX, tileOriginY, tileWidth, tileHeight, // => frame of offScreenContext that gets drawn
originX, originY, tileWidth, tileHeight // => frame of ctx to draw in
);
Assuming that you have an animation you want to loop over, this has the added benefit of only having to generate the frames once into some kind of sprite-map, so that in subsequent iterations you'll only ever need to call ctx.drawImage(), at the expense of an increased memory footprint of course...
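A rough sketch of that sprite-map idea (the frame size and count here are illustrative):
var frameW = 16, frameH = 8, frameCount = 600;
var spriteMap = document.createElement('canvas');
spriteMap.width = frameW * frameCount;
spriteMap.height = frameH;
var spriteCtx = spriteMap.getContext('2d');
// Generate every frame once, up front.
for (var f = 0; f < frameCount; f++) {
    var buf = spriteCtx.createImageData(frameW, frameH);
    // ...fill buf.data with the frame's RGBA values...
    spriteCtx.putImageData(buf, f * frameW, 0);
}
// In the animation loop, only blit the current frame:
// ctx.drawImage(spriteMap, frame * frameW, 0, frameW, frameH, x, y, frameW, frameH);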
Why don't you use SVG?
If you have to use canvas, maybe you could implement drawing an image on a canvas yourself?
var red = oldred*(1-alpha)+imagered*alpha
...and so on...
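A rough sketch of that manual blend (names are illustrative and bounds checking is omitted for brevity): composite a small source ImageData onto a destination ImageData at (dx, dy).
function blendImageData(dst, src, dx, dy) {
    for (var y = 0; y < src.height; y++) {
        for (var x = 0; x < src.width; x++) {
            var s = (y * src.width + x) * 4;
            var d = ((y + dy) * dst.width + (x + dx)) * 4;
            var alpha = src.data[s + 3] / 255;
            for (var c = 0; c < 3; c++) {
                // new = old * (1 - alpha) + image * alpha
                dst.data[d + c] = dst.data[d + c] * (1 - alpha) + src.data[s + c] * alpha;
            }
            dst.data[d + 3] = Math.max(dst.data[d + 3], src.data[s + 3]);
        }
    }
}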
getCSSCanvasContext seems to be WebKit only, but you could also create an offscreen canvas like this:
var canvas = document.createElement('canvas');
canvas.setAttribute('width', 300); // use whatever you like for width and height
canvas.setAttribute('height', 200);
You can then draw onto it, and draw it onto another canvas with the drawImage method.
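For example (visibleCtx here stands in for the context of the canvas that is actually on the page):
var offCtx = canvas.getContext('2d');
offCtx.fillStyle = 'rgba(255, 0, 0, 0.5)';
offCtx.fillRect(10, 10, 50, 50);      // draw to the offscreen canvas
visibleCtx.drawImage(canvas, 0, 0);   // then composite it onto the visible canvas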
I want to check a collision between two Sprites in HTML5 canvas. So for the sake of the discussion, let's assume that both sprites are IMG objects and a collision means that the alpha channel is not 0. Now both of these sprites can have a rotation around the object's center but no other transformation in case this makes this any easier.
Now the obvious solution I came up with would be this:
calculate the transformation matrix for both
figure out a rough estimation of the area where the code should test (like offset of both + calculated extra space for the rotation)
for all the pixels in the intersecting rectangle, transform the coordinate and test the image at the calculated position (rounded to nearest neighbor) for the alpha channel. Then abort on first hit.
The problem I see with that is that a) there are no matrix classes in JavaScript, which means I have to do the matrix math in JavaScript myself, which could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore, I have to replicate something I already have to do when drawing (or rather, what canvas does for me: setting up the matrices).
I wonder if I'm missing anything here and if there is an easier solution for collision detection.
I'm not a JavaScript coder, but I'd imagine the same optimisation tricks work just as well in JavaScript as they do in C++.
Just rotate the corners of the sprite instead of every pixel. Effectively you would be doing something like software texture mapping. You could work out the x,y position of a given pixel using various gradient information. Look up software texture mapping for more info.
If you quadtree-decompose the sprite into "hit" and "non-hit" areas, you can check whether a given quadtree node is all "non-hit", all "hit", or a "possible hit" (i.e. it contains both hit and non-hit pixels). The first two cases are trivial to handle. In the last case you go down to the next decomposition level and repeat the test. This way you only check the pixels you need to, and for large areas of "non-hit" and "hit" you don't have to do such a complex set of checks.
Anyway, those are just a couple of thoughts.
I have to replicate something I already have to do on drawing
Well, you could make a new rendering context, plot one rotated white-background mask to it, set the compositing operation to lighter and plot the other rotated mask on top at the given offset.
Now if there's a non-white pixel left, there's a hit. You'd still have to getImageData and sift through the pixels to find that out. You might be able to reduce that workload a bit by scaling the resultant image downwards (relying on anti-aliasing to keep some pixels non-white), but I'm thinking it's probably still going to be quite slow.
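A sketch of that check (the mask canvases maskA/maskB, the intersection size w/h and the offsets ax/ay/bx/by are all assumed to exist; the rotation step is omitted for brevity):
var test = document.createElement('canvas');
test.width = w;
test.height = h;                       // size of the intersection rectangle
var tctx = test.getContext('2d');
tctx.drawImage(maskA, -ax, -ay);       // black silhouette on white background
tctx.globalCompositeOperation = 'lighter';
tctx.drawImage(maskB, -bx, -by);
// A pixel that is still not white means both sprites were opaque there.
var d = tctx.getImageData(0, 0, w, h).data;
var hit = false;
for (var i = 0; i < d.length; i += 4) {
    if (d[i] < 255) { hit = true; break; }  // checking the red channel is enough
}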
I have to test for collisions every frame which makes this pretty expensive.
Yeah, I think realistically you're going to be using precalculated collision tables. If you've got space for it, you could store one hit/no hit bit for every combination of sprite a, sprite b, relative rotation, relative-x-normalised-to-rotation and relative-y-normalised-to-rotation. Depending on how many sprites you have and how many steps of rotation or movement, this could get rather large.
A compromise would be to store the pre-rotated masks of each sprite in a JavaScript array (of Numbers, giving you 32 bits of easily AND-able pixel data per entry, or as characters in a String, giving you 16 bits) and AND each line of the intersecting sprite masks together.
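A sketch of that row-mask idea (bit i of each Number stands for column i, sprites are assumed to be at most 32 pixels wide, and maskB is offset by (xOffset, yOffset) relative to maskA):
function masksCollide(maskA, maskB, xOffset, yOffset) {
    if (xOffset <= -32 || xOffset >= 32) return false; // no horizontal overlap possible
    for (var y = 0; y < maskA.length; y++) {
        var by = y - yOffset;
        if (by < 0 || by >= maskB.length) continue;
        // Shift B's row into A's coordinate space before combining.
        var rowB = xOffset >= 0 ? (maskB[by] << xOffset) : (maskB[by] >>> -xOffset);
        if ((maskA[y] & rowB) !== 0) return true;
    }
    return false;
}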
Or, give up on pixels and start looking at eg. paths.
Same problem, an alternative solution. First I use getImageData to find a polygon that surrounds the sprite. Be careful here, because the implementation works with images that have a transparent background and contain a single solid object, like a ship. The next step is the Ramer-Douglas-Peucker algorithm to reduce the number of vertices in the polygon. I finally get a polygon with very few vertices, which is easy and cheap to rotate and to check for collisions against the other sprites' polygons.
http://jsfiddle.net/rnrlabs/9dxSg/
var canvas = document.getElementById("canvas");
var context = canvas.getContext("2d");
var img = document.getElementById("img");
context.drawImage(img, 0,0);
var dat = context.getImageData(0,0,img.width, img.height);
// see jsfiddle
var startPixel = findStartPixel(dat, 0);
var path = followPath(startPixel, dat, 0);
// 4 is RDP epsilon
var map = properRDP(path.map, 4, path.startpixel.x, path.startpixel.y);
// draw
context.beginPath();
context.moveTo(path.startpixel.x, path.startpixel.y);
for(var i = 0; i < map.length; i++) {
var p = map[i];
context.lineTo(p.x, p.y);
}
context.strokeStyle = 'red';
context.closePath();
context.stroke();