I am using EaselJS as an API for HTML5 canvas.
I noticed that the following code:
line.graphics.setStrokeStyle(1).beginStroke("black").moveTo(100,100).lineTo(200,200);
stage.addChild(line);
...produces the following line:
I set the thickness to 1, but the line is still fuzzy. If you zoom in on the screenshot, you can see it actually occupies 3 pixels. I believe I read somewhere that canvas draws a line centered on the boundary between two pixels, so both pixels end up partially colored, and that you need to shift your drawing coordinates by half a pixel so the line falls on a whole pixel.
I need a sharp image for my application. Please advise.
EaselJS is just an abstraction over the canvas API, which draws all lines at the exact coordinates you specify. The snapToPixel API does automatic rounding, but it doesn't account for the half-pixel issue you are describing.
The best-practice approach is to put everything into a Container and position that container at (0.5, 0.5) or (-0.5, -0.5). That adjusts everything at once, so you can keep working in a normal whole-number coordinate space rather than offsetting all your calculations.
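For example, a minimal sketch of that approach (assuming the createjs namespace used by recent EaselJS builds, and the stage from your snippet):

var container = new createjs.Container();
container.x = 0.5; // shift everything onto pixel centers
container.y = 0.5;
stage.addChild(container);

// Draw in whole-number coordinates as usual; the container's offset
// keeps 1px strokes crisp.
var line = new createjs.Shape();
line.graphics.setStrokeStyle(1).beginStroke("black").moveTo(100, 100).lineTo(200, 200);
container.addChild(line);
stage.update();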
Related
I have a world made up of randomly generated blocks (black being on, white being off). When zoomed out, it essentially looks like white noise. However, instead of each block being 1 pixel, they are 40 pixels and drawn as an image texture.
My game works in a camera basis, so you can only see a fraction of the map at a time and you must move the character around to explore the rest.
Currently, I have my game simply render each image (block texture) that is in range of the canvas. This results in drawing 80-100 images every single frame. While it works fine on a desktop computer, it doesn't do very well on mobile.
Considering the map's look doesn't change throughout the game, I wanted to try a different approach. I created a canvas the size of the world, which ended up being 1600x24000 pixels. I drew all the textures onto this external, hidden canvas once, upon initialization. Then I would use the clipping arguments of drawImage to take the subsection that I needed. While it worked, it was extremely laggy and made things much worse than before. In addition, image quality dropped to a more blurred look, which is undesirable.
Now I'm looking for ways to better go about this. So my question is, how should I go about this? Thank you.
When you're using a huge canvas, you can't be sure the renderer won't load the whole texture to render even a part of it. Since you see a huge performance drop, that might well be happening.
A few things I would do:
• Try with only fillRect to see how much drawImage is to blame.
• Set up the context once and for all, then only use drawImage in its simplest flavor:
var topLeft = { col: 12, row: 6 }; // index of the left-most visible tile

context.save();
context.scale(scale, scale);
for (var column = 0; column < columnSeenCount; column++) {
    for (var row = 0; row < rowSeenCount; row++) {
        // getTileImage is a stand-in for your own tile lookup
        var image = getTileImage(topLeft.col + column, topLeft.row + row);
        context.drawImage(image, column, row);
    }
}
context.restore();
This way you avoid recomputing a transform matrix for every drawImage: much less math for the renderer.
• If you handle the drawImage coordinates yourself, use only rounded coordinates; it will be faster.
• You must also round the scale to prevent artifacts. Rounding to whole numbers might be too coarse a limit for the scale, but you can easily 'round' to 0.5, 0.25, and so on by doing:
var precision = 2; // 0 => floor ; 1 => round to 0.5 ; 2 => round to 0.25 ; ...
var factor = 1 << precision;
var roundedFigure = Math.floor(figure * factor) / factor;
• If the way your application is built makes it easy to draw tile type by tile type, do it; you might win some time, since you'll benefit from the current image staying in the cache.
• After that, your only resort will be to use WebGL or a WebGL-based renderer...
Two more ideas that could increase your performance:
Check whether your whole world is rendered, or just the visible images (on the stage). For example, double the world size and see if it impacts performance. It shouldn't, if you only render the relevant images.
Use CocoonJS to compile your application. It promises to speed up your application by up to 10 times on mobile devices. But be aware that it imposes some serious restrictions on the HTML around your canvas.
Obsolete answer, which assumed the problem was caused by zooming out too far:
In 3D graphics, mipmaps are used to avoid this problem. Essentially, smaller images (i.e. with fewer pixels) are used when the object is more distant from the camera.
Maybe you can find something appropriate if you google something like "html5 canvas 2D mipmaps". Or you could build a simple mipmapping algorithm yourself.
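A minimal sketch of building mipmap levels by hand (the function name is illustrative; fullTile is assumed to be a loaded tile Image):

function buildMipmaps(fullTile, levels) {
    var mips = [fullTile];
    for (var i = 1; i < levels; i++) {
        var prev = mips[i - 1];
        var c = document.createElement('canvas');
        c.width = Math.max(1, prev.width >> 1);
        c.height = Math.max(1, prev.height >> 1);
        // let the browser downscale once, at init time, instead of every frame
        c.getContext('2d').drawImage(prev, 0, 0, c.width, c.height);
        mips.push(c);
    }
    return mips; // mips[n] holds the tile at 1/2^n scale
}

At render time, pick the level whose size is closest to the on-screen tile size and drawImage that, instead of letting drawImage downscale the full-size texture every frame.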
But before investing the work, test how performant this approach is by simply replacing all block images with 1x1-pixel images. Maybe your performance problem is not caused by slow rendering, as you assume. If that doesn't pinpoint the problem, learn to use a profiler.
A couple of questions and thoughts:
I would ditto @GameAlchemist's tip that the clipping version of drawImage is slower than "blitting" a separate tile image onto the canvas. Use separate images instead when you have such an overly large map image.
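For reference, the two flavors (these are the standard canvas drawImage signatures; the variable names are illustrative):

// clipping flavor: copies a sub-rectangle of the big map image every frame
ctx.drawImage(bigMap, sx, sy, sw, sh, dx, dy, dw, dh);

// blitting flavor: draws a whole, separate tile image
ctx.drawImage(tileImage, dx, dy);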
24000 pixels is too much width to keep in any single image.
It looks like you're panning horizontally. You could slice your 24000 pixel wide image into individual images of a more reasonable size. Each image might be 3X the screen width. Exchange the image when the user pans beyond the edge of the current image.
How many unique block image tiles are you using?
Perhaps reduce the number of unique tiles when you detect a mobile user. Then put each unique tile on a separate image or canvas.
Is your map largely 1 tile type (e.g. white/off)?
If so, you could make one single image of a grid of enough white tiles to fill the entire canvas, then add black tiles where necessary. This reduces your drawing to one white grid image plus any required black tile images.
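A sketch of that last idea (the tile size, image names, and blackTiles structure are assumptions, not part of any API):

// whiteGrid is a cached image of all-white tiles, one tile wider and taller
// than the canvas; blackTiles lists only the visible black cells.
// Assumes camX and camY are non-negative camera offsets in pixels.
function drawMap(ctx, whiteGrid, blackTile, blackTiles, tileSize, camX, camY) {
    ctx.drawImage(whiteGrid, -(camX % tileSize), -(camY % tileSize));
    for (var i = 0; i < blackTiles.length; i++) {
        var t = blackTiles[i]; // { col: ..., row: ... }
        ctx.drawImage(blackTile, t.col * tileSize - camX, t.row * tileSize - camY);
    }
}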
Say I have a rectangle that is 100 x 100 and I have a canvas 1000 x 1000.
As long as the rectangle's x coordinate is no more than 999 and no less than -99, some portion of the rectangle will be visible on the canvas. The same goes for the rectangle's y coordinate.
What I would like to know is this: if the rectangle's x or y coordinate is set so that the rectangle will not be visible on the canvas, do the internal workings of the canvas API still draw the rectangle, or does it optimize automatically, recognizing that the drawn bitmap won't be seen and therefore not attempting to draw it?
When drawing to the canvas, the boundaries are checked for each draw. If a pixel ends up outside the canvas, it is clipped (discarded).
If it weren't, you would get memory corruption and very soon a crash.
Canvas is designed to be very safe, so poorly written JavaScript (intentional or not) won't crash your browser. The same applies to colors, where color values (e.g. when manipulating a bitmap array directly) are clamped to the valid range.
Optimization is implementation-dependent, but it's reasonable to assume that if the area is completely outside the boundaries of the canvas, the draw operation is rejected in full. If it is partly inside, the implementation may start the internal block copy by moving the start and end cursors to cover only the effective area that will be rendered visible.
The other option is to check each pixel as it is rendered to see whether it's inside or outside the visible boundary. This, however, is not optimal.
To visualize: only the gray areas would be considered; the light blue would be ignored:
(I didn't show all possibilities but it should be easy to imagine the bottom parts etc.)
The cursor here is where the internal routine will start and stop looping through the pixels. In this case, if the area to be drawn is 100x100 pixels and is drawn at (-50, -50), then the internal cursor is set to (+50, +50) relative to the area being drawn, and the width and height are reduced likewise.
By moving the cursor and adjusting the width and height, the routine doesn't have to iterate through all the pixels, which optimizes the copy. (Although it is not quite accurate to say "all pixels": data is not copied per pixel but mainly in blocks related to memory alignment. Separate algorithms deal with optimized memory copying and take into account offset bytes, i.e. bytes that do not start or end on a clean memory boundary, so that 4 or 8 bytes are copied in one go rather than one byte at a time combined with masking (AND'ing the bits).)
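A sketch of that cursor arithmetic (the names are illustrative; this shows the math, not any browser's actual code):

function clipBlit(x, y, w, h, canvasWidth, canvasHeight) {
    var sx = Math.max(0, -x); // start cursor inside the source area
    var sy = Math.max(0, -y);
    var ex = Math.min(w, canvasWidth - x); // end cursor
    var ey = Math.min(h, canvasHeight - y);
    if (sx >= ex || sy >= ey) return null; // fully outside: reject the draw
    return { sx: sx, sy: sy, dx: x + sx, dy: y + sy, w: ex - sx, h: ey - sy };
}
// e.g. a 100x100 area drawn at (-50, -50) gives { sx: 50, sy: 50, dx: 0, dy: 0, w: 50, h: 50 }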
The boundaries apply to lines, circles, etc. as well. Their effective drawing area is handled as a rectangular region, but there are approaches to drawing lines and circles, other than filling a square of pixels, that optimize further.
See e.g. the Bresenham algorithm for lines, the midpoint circle algorithm for circles, or the various algorithms for ellipses. I don't know the specific implementation in each browser, but for these you clip against the bounding square of the start and end coordinates, and in some cases (as with circles and ellipses) you may have to check as you go, perhaps drawing the circle in four parts and checking the part that overlaps the boundary on a pixel-by-pixel basis (this is implementation-specific, though).
When it comes to translation, that is merely a recalculation of coordinates (translate, rotation using rotation matrices, and so on). The new coordinates are then checked against the boundaries.
Now, that being said, it is not certain that browsers have their own specific implementation. They might use the system's native bitmap and clipping functionality instead where possible. However, the same principles described above apply in that case as well.
FWIW: on IE, Chrome, and Firefox, fully offscreen draws (non-draws?) took about 100 ms less than onscreen draws for 100,000 rects.
According to the canvas spec:
"When the destination rectangle is outside the destination image (the scratch bitmap), the pixels that land outside the scratch bitmap are discarded, as if the destination was an infinite canvas whose rendering was clipped to the dimensions of the scratch bitmap."
This is not absolutely specific to your question, but it's likely that all "outside the canvas view" operations are handled this way. So based on that, I'd say yes, they are "optimised".
I am attempting to use an HTML canvas element to draw each character available in a font file to a canvas. To make this question as simple as possible, pretend only one character is drawn to a canvas. From there, I want to use JavaScript to analyze the canvas and create triangular regions of the canvas that make up the entire character. The reason I need triangles is so that the data can later be sent to WebGL, and no detail will be lost by scaling the text size up or down.
I am looking for some sort of algorithm to accomplish this, or at least some knowledge to get me going in the right direction. If you believe I should use a different approach, please tell me why, but I figured this would be the best way to allow modifying text in many ways, as well as make it possible to create 3D block text.
Here's an article on how to draw resolution-independent curves with shaders:
http://research.microsoft.com/en-us/um/people/cloop/loopblinn05.pdf
My understanding is that instead of breaking the shapes into triangles, you break them into quads with enough info stored in the vertices to draw a portion of the curve inside each quad. In other words, as the shader draws each quad, there's a formula that can compute, for each pixel, whether that pixel is inside or outside the curve.
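For a quadratic Bézier segment, the paper's inside/outside test is tiny. A sketch of the idea (the (u, v) assignments come from the paper; the JavaScript framing is only illustrative, since the real test runs per pixel in a fragment shader):

// Each curve triangle's vertices carry (u, v) coordinates of
// (0, 0), (0.5, 0), and (1, 1); the GPU interpolates them across the
// triangle, and an interpolated pixel is inside the curve when:
function insideQuadratic(u, v) {
    return u * u - v < 0;
}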
I suggest you start with the keyword Polygon Triangulation.
Using these methods, you can split n-polygons into triangles like this:
Note that these methods only apply to figures with straight (rather than curved) edges.
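A minimal ear-clipping sketch of that idea (assumes a simple, counter-clockwise polygon with at least 3 vertices, given as an array of {x, y} points; returns triangles as triples of point indices):

function cross(o, a, b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}
function pointInTriangle(p, a, b, c) {
    // assumes triangle (a, b, c) is counter-clockwise
    return cross(a, b, p) >= 0 && cross(b, c, p) >= 0 && cross(c, a, p) >= 0;
}
function isEar(points, indices, ia, ib, ic) {
    var a = points[ia], b = points[ib], c = points[ic];
    if (cross(a, b, c) <= 0) return false; // reflex corner, not an ear
    for (var j = 0; j < indices.length; j++) {
        var k = indices[j];
        if (k === ia || k === ib || k === ic) continue;
        if (pointInTriangle(points[k], a, b, c)) return false; // vertex inside
    }
    return true;
}
function triangulate(points) {
    var indices = points.map(function (_, i) { return i; });
    var triangles = [];
    while (indices.length > 3) {
        for (var i = 0; i < indices.length; i++) {
            var ia = indices[(i + indices.length - 1) % indices.length];
            var ib = indices[i];
            var ic = indices[(i + 1) % indices.length];
            if (isEar(points, indices, ia, ib, ic)) {
                triangles.push([ia, ib, ic]); // clip the ear off
                indices.splice(i, 1);
                break;
            }
        }
    }
    triangles.push([indices[0], indices[1], indices[2]]);
    return triangles;
}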
So, you are trying to convert a raster image into vector data?
When zoomed in, that will result in very jagged-looking geometry, since each pixel is treated as a square-edged part of the geometry.
Couldn't you get your hands on the original vector (bezier curve) geometry for each glyph you are drawing?
Transforming that into triangle strips and fans would look smoother.
I'm trying to draw a tiled background using JavaScript on an HTML5 canvas, but it's not working, because shapes that intersect the edges of the canvas don't wrap around to the other side. (Just to be clear: these are static shapes; no motion over time is involved.) How can I get objects interrupted by one side of the canvas to wrap around to the other side?
Basically I'm looking for the "wraparound" effect that many video games use, most famously Asteroids; I just want that effect for a static purpose here. This page seems to be an example showing that it's possible. Note how an asteroid on, say, the right edge of the screen (whether moving or not) continues over to the left edge. Or, for that matter, an object in the corner is split between all four corners. Again, no motion is necessarily involved.
Anyone have any clues as to how I might draw, say, a square or a line that wraps around the edges? Is there perhaps some sort of option for canvas or JavaScript? My Google searches using obvious keywords have come up empty.
Edit
To give a little more context, I'm basing my work off the example here: Canvas as Background Image. (Also linked from here: Use <canvas> as a CSS background.) Repeating the image is no problem. The problem is getting the truncated parts of shapes to wrap around to the other side.
I'm not sure how you have the tiles set up; however, if they are all part of a single 'wrapper' slide which has its own x,y at, say, 0,0, then you could actually just draw it twice, or generate a new slide as needed. Hopefully this code illustrates the concept:
// Here, the 'tilegroup' is the same size as the canvas
function renderbg() {
    tiles.draw(tiles.posx, tiles.posy);
    if (tiles.posx < 0)
        tiles.draw(canvas.width + tiles.posx, tiles.posy);
    if (tiles.posx > 0)
        tiles.draw(-canvas.width + tiles.posx, tiles.posy);
}
So basically, the idea here is to draw the group of tiles twice: once in its actual position, and again to fill in the gap. You still need to detect when the entire group leaves the canvas completely and then reset it, but hopefully this leads you in the right direction!
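The same idea generalizes to both axes. A sketch (shape.draw is a stand-in for whatever draw call you use; w and h are the canvas dimensions):

// Draw the shape once for each of the nine wrap offsets; anything that
// lands outside the canvas is clipped for free by the canvas itself.
function drawWrapped(ctx, shape, x, y, w, h) {
    for (var dx = -1; dx <= 1; dx++) {
        for (var dy = -1; dy <= 1; dy++) {
            shape.draw(ctx, x + dx * w, y + dy * h);
        }
    }
}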
You could always create your tileable image in canvas, generate a data URL with toDataURL(), and then assign that data URL as a background to something and let CSS do the tiling... just a thought.
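A sketch of that (tileCanvas stands in for whatever canvas you've drawn the tile on):

var url = tileCanvas.toDataURL('image/png');
document.body.style.background = 'url(' + url + ') repeat';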
Edit: If you're having trouble drawing a tileable image, you could create a canvas of 3*width by 3*width, draw on it as usual (with the center square of the data taken as the final result), and then see if you can't draw from subsets of the canvas to itself. Looks like you'd have to use:
var myImageData = context.getImageData(left, top, width, height);
context.putImageData(myImageData, dx, dy);
(with appropriate measurements)
https://developer.mozilla.org/En/HTML/Canvas/Pixel_manipulation_with_canvas/
Edit II: The idea was that you'd have a canvas big enough to hold a center area of interest, plus buffer areas around it big enough to account for any of the shapes you may draw, like so:
XXX
XCX
XXX
You could draw the shapes once to this big canvas and then just blindly draw each of the X areas around the center area onto the center area (and then clear those areas out for the next drawing). So, if K is the number of shapes, instead of 4*K draws you have K + 8 draws (and then 8 clears). Obviously the practical applicability of this depends on the number of shapes and how they overlap, although I bet it could be tweaked. Depending on the complexity of your shapes, it may make sense to draw a shape 4 times as you originally thought, or to draw to some buffer area and then draw its pixel data 4 times, or something similar. I'll admit this is an idea that just popped into my head, so I might be missing something.
Edit III: And really, you could be smart about it. If you know how a set of objects will overlap, you should only have to draw from the buffer once. Say you've got a bunch of shapes in a row that only draw into the north overlapping region: all you should need to do is draw those shapes, then draw the north overlapping region onto the south side. The hairy regions would be the corners, but I don't think they really get hairy unless the shapes are large... sigh... at this point I should probably quiet down and see if there are any existing implementations of what I describe, because I'm not sure my writing off the cuff is helping anybody.
I'm trying to place a circle at 50% of the width of the paper using RaphaelJS. Is this possible without first doing the math (0.5 * pixel width)? I simply want to be able to place an element at 50% of its container's width. Is this even possible with the current Raphael API?
Raphael claims to be able to draw vector graphics, and yet it seems everything in the API is pixel-based. How can you draw a vector image using pixels? That seems 100% contradictory.
Likewise, as I understand vector art, it retains the same proportions regardless of actual size. Isn't that one of the primary reasons to use vector graphics: that it doesn't matter whether it's for screen, print, or whatever, it will always scale cleanly? Thus, I'm further confused by the need for something like ScaleRaphael; such functionality just seems part and parcel of creating vector graphics. But perhaps I just don't understand vector graphics?
It just doesn't seem like an image created with absolute pixel dimensions, and unable to be resized natively, qualifies as a vector image. That, or I'm missing a very large chunk of the API here.
Thanks in advance for any help. I've attempted to post this twice now to the RaphaelJS Google Group, but I guess they are censoring it for whatever reason; I've been waiting since last week for my posts to appear and there's still no sign of them (although other new posts are showing up).
Using pixel values to define shape positions and dimensions does not make a shape non-vector. Take Adobe Illustrator, for instance: it is a vector product, and yet the properties panel for each object still shows positions and dimensions in pixels.
A basic explanation of vector graphics, taking a rectangle as an example:
A vector rectangle will have a number of properties such as x, y, width, and height. These properties can be specified in pixels. The difference with vector (as opposed to raster) is that these pixel properties are only used to determine how the shape is drawn. So when the display is rendered (refreshed), the "system" can redraw the shape using the same properties without affecting the quality on resize. A raster image, however, will hold a lot more information (i.e. the exact value of each pixel used to form the shape/rectangle).
If the word "pixel" makes you think it is contradictory, just remember that everything on a computer screen is rendered in pixels. Even vector graphics have to be rasterized at some point in the graphics pipeline.
If you are worried about having to do a calculation (0.5 * width), just remember that something has to do it, and personally I would happily handle this simple step myself.
After all that: just calculate the size and position in pixels based on the size of your "paper" element, and feed those values into Raphael when creating the shape.
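A minimal sketch of that (the #container id is illustrative; the Raphael calls are the standard API):

var container = document.getElementById('container');
var paper = Raphael(container, container.clientWidth, container.clientHeight);
// 50% of the paper's width and height, computed once from the container
var circle = paper.circle(paper.width / 2, paper.height / 2, 40);
circle.attr({ fill: 'steelblue' });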