JavaScript canvas is amazing: it allows us to draw things like lines and polygons on the browser screen.
I wonder how the JavaScript canvas works. For example, to draw a line, does it use a series of aligned tiny images to simulate the line, or some other approach?
Thanks in advance.
Any reasonable implementer would just use a bitmap (stored internally in the browser), and draw to that using OS native drawing commands.
Why does it matter? It's not at all related to HTML+CSS, if that's what you're wondering.
More detail, for detail's sake:
When the browser's HTML parser sees a canvas element (of a given width and height) it needs to allocate an onscreen pixmap to cover that area. It either does this manually (i.e. with malloc()) or it calls into some OS-native drawing API to create a surface to draw on. That native API could be Windows, GTK, KDE, Qt, or any other drawing library the implementer of the browser chose, so it's highly dependent on the operating system. Internet Explorer probably calls into some Windows-native library (e.g. DirectX or WinFooBarMethod()).
Once the drawing surface is created, it's made accessible to the internal guts of the JavaScript interpreter, likely via a pointer or handle to the constructed drawing surface. Then, when the JS interpreter sees an invocation of one of the canvas methods, it turns this into a call to the appropriate OS native command.
So, using the Windows 3.1 style metaphor:
"new canvas(width, height)" = "WinCreatePixmap(width, height)"
"canvas.setPixel(x,y,color)" = "WinSetPixel(x,y,color)"
And using a manually managed pixmap:
"new canvas(width, height)" = "malloc(width * height * sizeof(Pixel))"
"canvas.setPixel(x,y,color)" = "canvas[x][y] = color;"
Again, it shouldn't matter to the JavaScript developer how these methods are implemented. The only people who need to care are the ones who are writing HTML5 compliant web browsers with canvas support.
If you know C++, you can go to the source.
For example, in Firefox, the "graphics context" object is implemented by the class nsCanvasRenderingContext2D. But that class doesn't actually modify the pixels directly. Instead, it asks a separate object, called Thebes, to do that. Thebes in turn delegates this work to a graphics library called Cairo, which typically asks a library provided by your operating system to do the actual pixel work. I imagine it's a similar story everywhere.
At the very bottom, the canvas has a two-dimensional array of pixels. Each pixel is a 32-bit integer. A pixel is set by assigning a value to an element of the array. Somewhere there's a bit of code that determines which pixels to paint and assigns the appropriate values to the appropriate array elements.
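To make that concrete, here's a minimal sketch of that model using the public ImageData API (this is an illustration, not any browser's actual internals; ctx is assumed to be a canvas 2D context obtained elsewhere):
// Sketch of the flat pixel-buffer model, using the public ImageData API.
// "ctx" is assumed to be a canvas 2D context obtained elsewhere.
var image = ctx.createImageData(ctx.canvas.width, ctx.canvas.height);
function setPixel(image, x, y, r, g, b, a) {
  // Four bytes per pixel (RGBA); rows are laid out one after another.
  var i = (y * image.width + x) * 4;
  image.data[i] = r;
  image.data[i + 1] = g;
  image.data[i + 2] = b;
  image.data[i + 3] = a;
}
setPixel(image, 10, 20, 255, 0, 0, 255); // one opaque red pixel
ctx.putImageData(image, 0, 0);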
In theory, the pixels might be drawn by your video card, but I have heard that graphics cards generally can't be trusted to do 2D graphics, because the hardware is aggressively tuned for 3D gaming and trades away too much accuracy for speed.
If you're interested in how line drawing works, check out Bresenham's Line Drawing Algorithm.
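For a taste of what that looks like, here's a minimal sketch of Bresenham's algorithm in JavaScript (setPixel is a placeholder for whatever actually writes a pixel):
// A minimal Bresenham sketch: computes which pixels to paint for a line
// from (x0, y0) to (x1, y1). "setPixel" is assumed to write one pixel.
function drawLine(setPixel, x0, y0, x1, y1) {
  var dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
  var dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
  var err = dx + dy;
  while (true) {
    setPixel(x0, y0);
    if (x0 === x1 && y0 === y1) break;
    var e2 = 2 * err;
    if (e2 >= dy) { err += dy; x0 += sx; } // step in x
    if (e2 <= dx) { err += dx; y0 += sy; } // step in y
  }
}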
Surely that's implementation-specific to the JavaScript engine and browser in question?
You're thinking about it too much; it's simple:
A canvas is like an image in the browser that can be drawn onto.
I think implementation is important. Why does it matter? Look at Flash. When you use its drawing API to create complex fractal artwork, it actually creates vector artwork and makes every line and curve a child of the object being drawn on, so it re-renders all of that vector artwork every frame... CRASH! Or chug... chug... chug...
So for complex fractals or art generated from equations, I have to use a Bitmap or the render engine chokes. It DOES make a difference, since I am now trying to port some of my Flash multimedia to JavaScript and encountering differences among browsers.
Related
There are a few JavaScript libraries that allow rendering 2D graphics using WebGL. I have found that the most popular are three.js and pixi.js. Both of them let you use either a WebGL or a canvas renderer (for devices that don't support WebGL).
I want to ask which of these libraries is better under the following terms:
I want to use it only for 2D graphics, so 3D support is completely optional.
Performance is very important: lots of elements, text, and the ability to smoothly scale and translate them are crucial.
The canvas renderer (for when a device does not support WebGL) is important, and I would love to see the same (or a very similar) result with both renderers.
If there is another library I should consider in this particular situation, feel free to mention it :)
I have the exact same use case and just tried both. Loading a lot of static sprites (from the same image) is faster in three.js for 5000 sprites and above, but animating only a few of those sprites gives better framerates in pixi (again, for 5000 sprites). (This was tested on Chrome and IE9 on desktop.)
The biggest difference was with the Canvas renderer, where pixi's autodetect gives the same results as WebGL (if slower) for the same code, but three.js's Canvas renderer doesn't support the Sprite type, meaning that to write portable code you have to use Particles. If you don't use sprites all that much and mostly have quads or triangles, that wouldn't be an issue.
If availability of tutorials and such is at all an issue, three.js is more established, so there's more material.
Otherwise, for up to about 2-3k elements rendered at the same time, I'd go with pixi.
I've worked a lot with pixi.js and am currently working with three.js.
pixi.js is based on the AS3 API that was used for Flash games, which means the API (structure) has been tested and refined for much longer than three.js's. (I've only used both for 2D.)
three.js, being more popular, might be more up to date on some edge cases. For example, it has more flexibility than pixi.js when adding a shader: pixi.js does not yet offer a way to add a "raw" shader with nothing prepended to the top of the shader code, which matters for specifying the shader version "#version 300 es" (pixi.js uses an older version by default).
I need to scale images in array form in a Web Worker. If I were outside a web worker I could use a canvas and drawImage to copy certain parts of an image or scale it.
It looks like I can't use a canvas in a web worker, so what can I do? Is there any pure JavaScript library that can help me?
Thanks a lot in advance.
Scaling can be done in various ways, but they all boil down to either removing or creating pixels from the image. Since images are essentially matrices (represented as flat arrays) of pixel values, you can look at scaling up an image as enlarging that array and filling in the blanks, and at scaling down an image as shrinking the array by leaving values out.
That being said, it is typically not that difficult to write your own scale function in JavaScript that works on arrays. Since you already have the images in the form of a JavaScript array, you can pass that array in a message to the Web Worker, scale it with your scale function, and send the scaled array back to the main thread.
In terms of representation I would advise you to use Uint8ClampedArray, which was designed for RGBA-encoded images (color with alpha channel) and is more efficient than normal JavaScript arrays. You can also easily send Uint8ClampedArray objects in messages to your Web Worker, so that won't be a problem. Another benefit is that Uint8ClampedArray is used by the ImageData datatype (it replaced CanvasPixelArray) of the Canvas API. This means it is quite easy to draw your scaled image back on a canvas (if that is what you want): simply get the current ImageData of the canvas' 2D context using ctx.getImageData() and change its data attribute to your scaled Uint8ClampedArray object.
By the way, if you don't have your images as arrays yet, you can use the same method in reverse: first draw the image on a canvas, then use the data attribute of the resulting ImageData object to retrieve the image as a Uint8ClampedArray.
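For example, a rough sketch of that round trip might look like this (img is assumed to be an already-loaded image and worker an already-created Web Worker):
// Sketch: draw an image to a canvas, pull its pixels out as a
// Uint8ClampedArray, and hand the underlying buffer to a worker.
// "img" (a loaded image) and "worker" are assumed to exist.
var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
var ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);
var pixels = ctx.getImageData(0, 0, img.width, img.height).data;
worker.postMessage(
  { buffer: pixels.buffer, width: img.width, height: img.height },
  [pixels.buffer] // transfer the buffer instead of copying it
);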
Regarding methods to upscale an image, there are basically two components you need to implement. The first is to distribute the known pixels (i.e. the pixels from the image you are scaling) over the larger new array you have created. An obvious way is to divide all the pixels evenly over the new space. For example, if you are making an image twice as wide, you simply skip a position after each pixel, leaving blanks in between.
The second component is to fill in those blanks, which can be slightly less straightforward. However, several methods are fairly easy. (On the other hand, if you have some knowledge of Computer Vision or Image Processing you might want to look at more advanced methods.) An easy and somewhat obvious method is to interpolate each unknown pixel position from its nearest neighbour (i.e. the closest known pixel) by duplicating that pixel's color. This typically produces an effect of bigger pixels (larger blocks of the same color) when you scale an image up too much. Instead of duplicating the color of the closest pixel, you can also take the average of several known pixels nearby, possibly combined with weights where closer pixels count more in the average than pixels farther away. Other methods include blurring the image using Gaussians. If you want to find out which method is best for your application, look at some pages about image interpolation. Of course, remember that scaling up always means filling in stuff that isn't really there, which will always look bad if you do it too much.
As far as scaling down is concerned, one typically just removes pixels by transferring only a selection of them from the current array to the smaller array. For example, if you wanted to make an image half as wide, you'd roughly iterate through the current array in steps of 2 (this depends a bit on the dimensions of the image, even or odd, and the representation you are using). There are methods that do this better by removing the pixels that will be missed the least, but I don't know enough about them.
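To tie both directions together, here is a rough sketch of nearest-neighbour scaling over a flat RGBA Uint8ClampedArray; all parameter names are illustrative, and the same function handles both up- and downscaling:
// Nearest-neighbour scaling sketch for a flat RGBA Uint8ClampedArray.
// Works for both up- and downscaling: every target pixel copies the
// closest source pixel. All parameter names here are illustrative.
function scaleNearest(src, srcW, srcH, dstW, dstH) {
  var dst = new Uint8ClampedArray(dstW * dstH * 4);
  for (var y = 0; y < dstH; y++) {
    var sy = Math.floor(y * srcH / dstH); // nearest source row
    for (var x = 0; x < dstW; x++) {
      var sx = Math.floor(x * srcW / dstW); // nearest source column
      var s = (sy * srcW + sx) * 4;
      var d = (y * dstW + x) * 4;
      dst[d] = src[s];
      dst[d + 1] = src[s + 1];
      dst[d + 2] = src[s + 2];
      dst[d + 3] = src[s + 3];
    }
  }
  return dst;
}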
By the way, all of this is practically unrelated to web workers. You would do it in exactly the same way if you wanted to scale images in JavaScript on the main thread, or in any other language for that matter. Web Workers are, however, a very nice way to do these calculations on a separate thread instead of on the UI thread, which means the website itself does not become unresponsive. However, like you said, everything that involves the canvas element needs to be done on the main thread, but scaling arrays can be done anywhere.
Also, I'm sure there are JavaScript libraries that can do this for you and depending on their methods you can also load them in your Web Worker using importScripts. But I would say that in this case it might just be easier and a lot more fun to try to write it yourself and make it tailor-made for your purpose.
And depending on how advanced your programming skills are and the speed at which you need to scale, you can always try to do this on the GPU instead of the CPU using WebGL, though that seems like overkill in this case. You could also chop your image into several pieces and scale the separate parts on several Web Workers, making it multi-threaded, although combining the parts afterwards is certainly not trivial. Multi-threading perhaps makes more sense when you have a lot of images that need to be scaled on the client side.
It all really depends on your application, the images and your own skills and desires.
Anyway, I hope that roughly answers your question.
I feel some specifics on mslatour's answer are needed, since I just spent 6 hours trying to figure out how to "…simply… change its data attribute to your scaled Uint8ClampedArray object". To do this:
① Send your array back from the web-worker. Use the form:
self.postMessage(bufferToReturn, [bufferToReturn]);
to pass your buffer to and from the web worker without making a copy of it, if you don't want to (it's faster this way; there is some MDN documentation on this, but I can't link to it as I'm out of rep, sorry). Anyway, you can also wrap the first bufferToReturn inside objects or arrays, like this:
self.postMessage({buffer:bufferToReturn, width:500, height:500}, [bufferToReturn]);
You use something like
webWorker.addEventListener('message', function(event) {your code here})
to listen for a posted message. (In this case, the events being posted come from the web worker and the listener is in your normal JS code. It works the same the other way around; just switch the 'self' and 'webWorker' variables.)
② In your browser-side Javascript (as opposed to worker-side), you can use imageData.data.set() to "simply" change the data attribute and put it back in the canvas.
var imageData = context2d.createImageData(width, height); // blank ImageData of the right size
imageData.data.set(new Uint8ClampedArray(bufferToReturn)); // copy the worker's pixels into it
context2d.putImageData(imageData, x_offset, y_offset); // paint it onto the canvas
I would like to thank hacks.mozilla.org for alerting me to the existence of the data.set() method.
p.s. I don't know of any libraries to help with this… yet. Sorry.
I have yet to test it out myself, but there is a pure JS library that might be of use here:
https://github.com/taisel/JS-Image-Resizer
I'm investigating the possibility of producing a game using only HTML's canvas as the display medium. To take an example task I need to do: I need to construct the game environment from a number of isometric tiles. Of course, working in 2D means they by necessity come in rectangular packages, so there's a large overlap between tiles.
I'm old enough that the natural solution to this problem is to call BitBltMasked. Oh wait, no, an HTML canvas doesn't have anything as simple and as pleasing as BitBlt. It seems the only way to dump pixel data into a canvas is either with drawImage(), which has no useful drawing modes that ignore the alpha channel, or with ImageData objects that hold the image data in an array... to which every. access. is. bounds. checked. and. therefore. dog. slow.
OK, that's more of a rant than a question (things the W3C likes tend to provoke that from me), but what I really want to know is: how do I draw fast to a canvas? I'm finding it very difficult to shake the feeling that doing hundreds of drawImage() calls a second, where every draw respects the alpha channel, is inherently sinful and likely to make my application perform like arse in many browsers. On the other hand, the only way to implement a proper BitBlt relies heavily on the browser using a hotspot-like execution technique to make it run fast.
Is there any way to draw fast across every possible implementation, or do I just have to forget about performance?
This is a really interesting problem, and there's a few interesting things you can do to solve it.
First, you should know that drawImage can accept a Canvas, not just an image. The "sub-Canvas"es don't even need to be in the DOM. This means that you can do some compositing on one canvas, then draw it to another. This opens a whole world of optimization opportunities, especially in the context of isometric tiles.
Let's say you have an area that's 50 tiles long by 50 tiles wide (I'll say meters for the sake of my own sanity). You might divide the area into 10x10m chunks, each represented by its own Canvas. To draw the full scene, you'd simply draw each of the chunks' Canvas objects to the main canvas that's shown to the user. If only four chunks are visible (a 20x20m area), you would only perform four drawImage operations.
Of course, each of those individual chunks will need to render its own Canvas. On game ticks where nothing happens in a chunk, you simply don't do anything: the Canvas will remain unchanged and will be drawn as you'd expect. When something does change, you can do one of a few things depending on your game (see the sketch after this list):
If your tiles extend into the third dimension (i.e. you have a Z-axis), you can draw each "layer" of the chunk into its own Canvas and only update the layers that need updating. For example, if each chunk contains ten layers of depth, you'd have ten Canvas objects. If something on layer 6 was updated, you would only need to re-paint layer 6's Canvas (probably one drawImage per square meter, which would be 100), then perform one drawImage operation per layer in the chunk (ten) to re-draw the chunk's Canvas. Decreasing or increasing the chunk size may increase or decrease performance, depending on the number of updates you make to the environment in your game. Further optimizations can be made to eliminate drawImage calls for obscured tiles and the like.
If you don't have a third dimension, you can simply perform one drawImage per square meter of a chunk. If two chunks are updated, that's only 200 drawImage calls per tick (plus one call per chunk visible on the screen). If your game involves very few updates, decreasing the chunk size will decrease the number of calls even further.
You can perform updates to the chunks in their own game loop. If you're using requestAnimationFrame (as you should be), you only need to paint the chunk Canvas objects to the screen. Independently, you can perform game logic in a setTimeout loop or the like. Then, each chunk could be updated in its own tick between frames without affecting performance. This can also be done in a web worker using getImageData and putImageData to send the rendered chunk back to the main thread whenever it needs to be updated, though making this work seamlessly will take a good deal of effort.
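Here's the sketch promised above. It's only an illustration of the chunk idea, with made-up names like CHUNK_PX and drawChunkContents standing in for your own game code:
// Minimal sketch of the chunk idea: each chunk renders to its own
// offscreen canvas, and the main canvas only composites them. Names
// like CHUNK_PX and drawChunkContents are illustrative assumptions.
var CHUNK_PX = 320; // e.g. a 10x10 chunk of 32px tiles
function makeChunk() {
  var canvas = document.createElement('canvas');
  canvas.width = canvas.height = CHUNK_PX;
  return { canvas: canvas, ctx: canvas.getContext('2d'), dirty: true };
}
function render(mainCtx, chunks) { // chunks: 2D array of chunk objects
  for (var cy = 0; cy < chunks.length; cy++) {
    for (var cx = 0; cx < chunks[cy].length; cx++) {
      var chunk = chunks[cy][cx];
      if (chunk.dirty) {
        drawChunkContents(chunk.ctx, cx, cy); // your game-specific tile drawing
        chunk.dirty = false;
      }
      // One drawImage per visible chunk, regardless of its tile count.
      mainCtx.drawImage(chunk.canvas, cx * CHUNK_PX, cy * CHUNK_PX);
    }
  }
}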
The other option that you have is to use a library like pixi.js to render the scene using WebGL. Even for 2D, it will increase performance by decreasing the amount of work that the CPU needs to do and shifting that over to the GPU. I'd highly recommend checking it out.
I know that GameJS has blit operations, and I certainly assume any other html5 game libraries do as well (gameQuery, LimeJS, etc etc). I don't know if these packages have addressed the specific array-bounds-checking concern that you had, but in practice their samples seem to work plenty fast on all platforms.
You should not make assumptions about what speedups make sense. For example, the GameJS developer reports that he was going to implement dirty-rectangle tracking, but it turned out that modern browsers do this automatically (link).
For this reason and others, I suggest getting something working before thinking about speed. Also, make use of drawing libraries, as their authors have presumably spent some time optimizing performance.
I have no personal knowledge of this, but you could look into appMobi's "direct canvas" HTML element, which is allegedly a much faster version of the normal canvas (link). I'm confused about whether this works in all browsers, just WebKit browsers, or just appMobi's own special browser.
Again, you should not make assumptions about what speedups make sense without a very deep knowledge of web browser internal processes. That webpage about "direct canvas" mentions a bunch of things that slow down canvas-drawing: "Reflowing text, mapping hot spots, creating indexes for reference links, on and on." Alpha-blending and array-bounds-checking are not mentioned as prominent causes of slowness!
Unfortunately, there's no way around the alpha composition overhead. Clipping may be one solution, but I doubt there would be much, if any, performance gain. Not to mention how complicated such a route would be to implement on irregular shapes.
When you have to draw the entire display, you're going to have to deal with the performance hit. Although afterwards, you have a whole screen's worth of pre-calculated alpha imagery and you can draw this image data at an offset in one drawImage call. Then, you would only have to individually draw the new tiles that are scrolled into view.
But still, the browser is having to redraw each pixel at a different location in the canvas. Which is quite expensive. It would be nice if there was a method for just scrolling pixels, but no luck there either.
One idea that comes to mind is that you could use multiple canvases, translating each individual canvas instead of redrawing the pixels. This would allow the browser to decide how to move those pixels in a more native way, at least in theory. Then you could render the newly visible tiles on a new (or reused/cached) canvas element, positioning it to match up with the last screen render.
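As a rough illustration of the "pre-render, then blit at an offset" idea (sceneCanvas is an assumed offscreen canvas already holding the full pre-composited scene):
// Sketch of the "pre-render once, blit at an offset" idea above.
// "sceneCanvas" is assumed to be an offscreen canvas already holding
// the full pre-composited scene.
function drawScrolled(mainCtx, sceneCanvas, offsetX, offsetY) {
  // A single drawImage moves the whole pre-blended image; no per-tile
  // alpha compositing happens on this path.
  mainCtx.drawImage(sceneCanvas, -offsetX, -offsetY);
}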
But that's just my two blits... I mean bits... duh, I mean cents :]
I'd like to be able to write an application in HTML5 that is similar to the following.
HTML5 Canvas Animals on the Beach Game with KineticJS
The problem with that demo, though, is that the mouse-over event is only accurate to the rectangle surrounding the animal. Is there any way to do this with more accuracy, be it in KineticJS or otherwise?
There are generally two ways:
Using custom paths with each image as hitboxes (that you manually define) then using an is-point-in-path algorithm
Using a ghost-canvas (or whatever you like to call it) as I detailed in this old tutorial. Ignore the link to the new tutorial, the old one uses what you'd want.
The first method is much faster but requires a lot more code and manual work. The second method is pixel-perfect but much slower. Still, if you don't have an enormous number of objects it may suit your needs.
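For illustration, a rough sketch of the ghost-canvas approach might look like this (mainCanvas and drawShape are placeholders for your own canvas and drawing code):
// Pixel-perfect hit-test sketch along the lines of the ghost-canvas
// method (2) above. "mainCanvas" and "drawShape" are assumptions: draw
// only the shape being tested, then read the alpha under the cursor.
var ghost = document.createElement('canvas');
ghost.width = mainCanvas.width;
ghost.height = mainCanvas.height;
var gctx = ghost.getContext('2d');
function hitsShape(drawShape, mouseX, mouseY) {
  gctx.clearRect(0, 0, ghost.width, ghost.height);
  drawShape(gctx); // render just this one shape onto the ghost canvas
  // Non-zero alpha means the cursor is over an opaque part of the shape.
  return gctx.getImageData(mouseX, mouseY, 1, 1).data[3] > 0;
}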
I would like to know if there are existing methods in JavaScript (for HTML5-supporting browsers) that can evaluate a bitmap-on-bitmap hitTest the way Flash does.
Also, how could a "blur filter" be achieved?
Can existing DIV/SPAN tags be "drawn" (like with Flash's BitmapData.draw() method) into a bitmap object, so it can be used for hitTest purposes on the canvas?
I think I may have the HTML5 jargon all wrong here, but hopefully this makes some sense.
Are there any built-in methods to check if bitmaps touch each other, at pixel-level evaluation?
Thanks!
No, there are no methods in HTML Canvas or Context to determine if two regions overlap. No tests for non-square regions of non-transparent pixels overlapping, no tests for transformed bounding boxes overlapping, no tests even for overlapping axis-aligned bounding boxes. Any such hit testing will need to be done by you or a higher-level API tracking individual bitmaps. The Canvas/Context is a non-retained low-level pixel blitting and drawing API.
No, you cannot serialize the rendering of HTML elements into a canvas image (other than using drawImage() to copy images and/or canvases). This includes attempting to capture content that is drawn underneath a transparent/semi-transparent canvas. There are security problems if this is allowed, and so it is not.
However, you can 'blur' content already drawn to the canvas by using getImageData() to get the raw pixels, then manipulating the pixel values, and then using putImageData() to push the modified pixels back to the canvas.
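As a rough sketch of that approach, a naive 3x3 box blur could look like this (a real blur filter would probably use a Gaussian kernel and separable passes, but the getImageData/putImageData plumbing is the same):
// A simple 3x3 box-blur sketch using the getImageData/putImageData
// approach described above.
function boxBlur(ctx, w, h) {
  var src = ctx.getImageData(0, 0, w, h);
  var dst = ctx.createImageData(w, h);
  for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
      var r = 0, g = 0, b = 0, a = 0, n = 0;
      for (var dy = -1; dy <= 1; dy++) {
        for (var dx = -1; dx <= 1; dx++) {
          var nx = x + dx, ny = y + dy;
          if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
          var i = (ny * w + nx) * 4;
          r += src.data[i];
          g += src.data[i + 1];
          b += src.data[i + 2];
          a += src.data[i + 3];
          n++;
        }
      }
      var o = (y * w + x) * 4; // averages are clamped/rounded on assignment
      dst.data[o] = r / n;
      dst.data[o + 1] = g / n;
      dst.data[o + 2] = b / n;
      dst.data[o + 3] = a / n;
    }
  }
  ctx.putImageData(dst, 0, 0);
}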
PlayMyCode has a great step-by-step explanation of a JavaScript per-pixel collision-detection function that works very much like ActionScript's hitTest.
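To give a sense of the general shape of such a function (this is a sketch, not PlayMyCode's actual code), a per-pixel test typically intersects the bounding boxes first and then compares alpha values in the overlap:
// General shape of a per-pixel hit test (a sketch, not PlayMyCode's
// actual code): intersect the two bounding boxes, then look for any
// spot where both sprites have a non-transparent pixel. Each sprite is
// assumed to be { data: RGBA Uint8ClampedArray, x, y, w, h }.
function pixelHitTest(a, b) {
  var left = Math.max(a.x, b.x), right = Math.min(a.x + a.w, b.x + b.w);
  var top = Math.max(a.y, b.y), bottom = Math.min(a.y + a.h, b.y + b.h);
  if (left >= right || top >= bottom) return false; // boxes don't even overlap
  for (var y = top; y < bottom; y++) {
    for (var x = left; x < right; x++) {
      var ai = ((y - a.y) * a.w + (x - a.x)) * 4 + 3; // alpha byte in sprite a
      var bi = ((y - b.y) * b.w + (x - b.x)) * 4 + 3; // alpha byte in sprite b
      if (a.data[ai] > 0 && b.data[bi] > 0) return true;
    }
  }
  return false;
}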