How to create "image building" effect - javascript

I'm looking for a convenient way to create the following effect in a web application: I have a picture (not very high resolution), and I want it to appear as a cloud of particles in some random part of the screen and then move into its final position.
It's OK if I lose some resolution (I don't think 1x1px particles are nice ;) ).
I want to use Silverlight/canvas or Processing.js/canvas.
Any ideas?
Thanks.

When your Silverlight application loads the picture, you can split it into tiles. Use one object for each tile and store that tile's home position (i.e. where it sits in the original picture). Then give each tile a random position and use a loop to move it in a straight line from the random position back to its home position. This appears as a cloud that resolves into the correct picture.
You can then play around with the size and number of tiles, and how they move to their correct position (you could have them slow down, or follow a curve instead of a straight line).
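A minimal sketch of the same idea on a plain canvas (not Silverlight), assuming the image is already loaded, a canvas with id "stage", and a made-up tileSize of 8px; each tile remembers its home position and is eased toward it every frame:

// sketch: split an image into tiles and fly them from random spots to their home positions
const canvas = document.getElementById('stage');
const ctx = canvas.getContext('2d');
const tileSize = 8;                               // assumed tile size in px
const tiles = [];

function buildTiles(img, destX, destY) {
  for (let y = 0; y < img.height; y += tileSize) {
    for (let x = 0; x < img.width; x += tileSize) {
      tiles.push({
        sx: x, sy: y,                             // source rect inside the image
        homeX: destX + x, homeY: destY + y,       // where the tile belongs on screen
        x: Math.random() * canvas.width,          // random start position
        y: Math.random() * canvas.height
      });
    }
  }
}

function animate(img) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  let moving = false;
  for (const t of tiles) {
    t.x += (t.homeX - t.x) * 0.08;                // ease each tile toward its home position
    t.y += (t.homeY - t.y) * 0.08;
    if (Math.abs(t.homeX - t.x) > 0.5 || Math.abs(t.homeY - t.y) > 0.5) moving = true;
    ctx.drawImage(img, t.sx, t.sy, tileSize, tileSize, t.x, t.y, tileSize, tileSize);
  }
  if (moving) requestAnimationFrame(() => animate(img));
}

// usage, once the image has loaded: buildTiles(img, 200, 150); animate(img);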

Related

FabricJS object coordinates relative to an image instead of whole canvas

I'm trying to create a small website using the FabricJS library, which adds additional features to the web canvas element.
My issue, however, is that I want to resize the canvas (in red) so that it fills the whole webpage.
On this canvas there is a background image (in green) on which I'll create some drawings (in orange; these could be lines, squares, ...).
Now, I would like to export all drawings in a coordinate system relative to the image and not to the whole canvas, because it should be possible to freely move around and zoom in/out the image for an enhanced drawing experience.
My idea for solving this would be to calculate the image's position relative to the canvas and subtract it from the drawings' coordinates, but that involves a lot of calculation. Maybe there is a more elegant approach with FabricJS?
Moreover, how can I guarantee that my drawings move around and zoom in/out with the image, so that they are always true to the image?
I've thought about this for days and came to the realization that I need input from the professionals.
I think toLocalPoint() might help. Given an object imageObj and absolute coordinates left and top of your drawing, you can find the relative coordinates like this:
const abs = new fabric.Point(left, top)
const rel = imageObj.toLocalPoint(abs, 'left', 'top')
console.log(rel.x, rel.y)
As for your second question: there is no easy way to "tie" two objects together, other than grouping them - and I assume you don't want to group them. Therefore, you would need to listen to all the appropriate events emitted by one object and make the adjustments to the other object in their handlers. To find out what events make sense to listen to in your case, see the events demo.
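For example, here is a rough sketch of one way to do it, assuming your drawings are plain Fabric objects kept in a hypothetical drawings array: record each drawing's offset from the image once, then reposition the drawings in the image's 'moving' handler.

// sketch: keep each drawing at a fixed offset from the image while it is dragged
// canvas is assumed to be your fabric.Canvas instance
const offsets = drawings.map(d => ({
  obj: d,
  dx: d.left - imageObj.left,   // offset recorded when the drawing is created
  dy: d.top - imageObj.top
}));

imageObj.on('moving', () => {
  for (const { obj, dx, dy } of offsets) {
    obj.set({ left: imageObj.left + dx, top: imageObj.top + dy });
    obj.setCoords();
  }
  canvas.requestRenderAll();
});

Zooming and rotation would need similar handlers ('scaling', 'rotating') with the offsets transformed accordingly, which is exactly why grouping is the simpler route when you can live with it.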

Isometric rendering without tiles, is that goal reachable?

I'm creating a 2d game in HTML5 canvas. It's an isometric world, so actually it's also 3d.
You always see that isometric games use tiles, and I think the reason is just for the depth logic.
My goal is to create the game without using a tile system. Each item can be placed by the user, so item locations like walls, trees, etc., have variable positions. The positions are isometric x, y, z coordinates.
If the game were just tiled, you could determine a fixed tile area for each item (I mean a one-tile item, or a wall ten tiles long).
But in my game I use areaX and areaY for the space an item occupies on the ground, and a height for the item's height, which is the z value. (The z axis in my world is the y axis on screen.)
The problem is hard to explain. It's about depth sorting.
See the following image:
The brown bar on top of the other bar should be after the gray pole.
I'm now using the simplest form of the painter's algorithm, which only compares the x, y, z coordinates of each item.
I know this incorrect rendering is a famous problem of the painter's algorithm.
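For reference, that naive painter-style pass might look something like the sketch below (isoX, isoY, z and the drawItem routine are made-up names); collapsing an item to a single depth value is exactly what breaks down for long items like the bar:

// naive painter's algorithm: one depth value per item, drawn back to front
items.sort((a, b) => (a.isoX + a.isoY + a.z) - (b.isoX + b.isoY + b.z));
for (const item of items) {
  drawItem(ctx, item);   // hypothetical per-item draw routine
}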
If this was a tiled game, the bars could be divided into 2 tiles next to each other. Then the tiles could be drawn in the order of their depth.
But since I'm trying to create it without tiles, I'm looking for some really clever logic.
The items should be rendered as if they were 3D objects. I would even like to have the following behavior: If multiple items would intersect, then the visible pixels of each item should be drawn, like in this image:
The main problem is that there is no information to determine what parts of an image should be visible, and how they must be cut.
I could create a depth mask for each image, like:
It works a little bit like a z-buffer.
But this is not feasible given canvas performance, because you would have to iterate over literally every pixel of every image on the map.
And the second big disadvantage is that you would have to load twice as many resources from the server...
Another solution might be to cut all the images into vertical strips 1 pixel wide and handle each strip as if it were a tile of 1x1 pixel. Then I'd still be creating a tiled game, but the tiles would be so small that I'd still reach my goal. This solution also has a performance problem, though, since each image would be split into hundreds of strips, each of which is a new separate image.
So I'm looking for a clever solution. Can anyone help me find a way to define the depths (or depth areas) of images so that correct rendering is possible within the performance limits of canvas?
This question was effectively asked again and answered over here.
The short answer is that you can use depth sprites with WebGL.

How to store information to properly display multiple images that makeup a single game character/object/NPC?

Sorry for the confusing title, I wasn't too sure on how to word it.
I'm just getting into 2D game development after recently discovering the power of HTML5's Canvas element. I'm on my first basic project to learn the ropes. This game will allow players to join a game and fight on a single map with tanks. The more kills they get, the higher-tier tanks they can unlock and use.
But I'm somewhat stuck on how to draw the tanks into the game (properly, at least). Each tank has three images: the body of the tank, the turret, and a tank shell that is drawn when the tank fires. Of course, when the game loads a tank, it needs to know the correct location to put these images. All tanks have different sizes, so I can't simply tell the game to load the body and turret in the same spots every time. When the body of the tank is drawn to the canvas, the turret of course needs to be drawn relative to that body.
So how should I store this info? Do I put a hardcoded object in the code that contains a list of turret offsets for each individual tank?
I hope I explained my problem clearly. Please ask if you have any questions. :)
One way to deal with sprites of different sizes is to define each sprite's position by its centerpoint rather than its usual top-left corner.
That way the size of each tank is irrelevant when positioning it.
To implement centerpoint positioning, you can:
translate to the desired position on the map
and then drawImage with an offset of -width/2 and -height/2.
An example:
Assume your tankBase sprite is 38px wide and 59px high.
Then to draw the tankBase centered at x/y==[100,100] you can do this:
ctx.translate(100,100);                // move the origin to the tank's centerpoint
ctx.drawImage(tankBase,-38/2,-59/2);   // draw offset by half the sprite's width and height
Here's a Demo: http://jsfiddle.net/m1erickson/V9uEr/
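As for where to keep the per-tank data: a small hardcoded config object per tank type is perfectly reasonable. Here is a rough sketch along those lines, where the image variables and the turretOffsetX/turretOffsetY properties are made up for illustration:

// hypothetical per-tank data: sprite images plus where the turret pivots on the body
const tankTypes = {
  light: { body: lightBodyImg, turret: lightTurretImg, turretOffsetX: 0, turretOffsetY: -4 },
  heavy: { body: heavyBodyImg, turret: heavyTurretImg, turretOffsetX: 0, turretOffsetY: -7 }
};

function drawTank(ctx, type, x, y, turretAngle) {
  const t = tankTypes[type];
  ctx.save();
  ctx.translate(x, y);                                    // centerpoint of the tank on the map
  ctx.drawImage(t.body, -t.body.width / 2, -t.body.height / 2);
  ctx.translate(t.turretOffsetX, t.turretOffsetY);        // turret pivot relative to the body center
  ctx.rotate(turretAngle);
  ctx.drawImage(t.turret, -t.turret.width / 2, -t.turret.height / 2);
  ctx.restore();
}

Combined with the centerpoint trick above, the turret offset is measured from the body's center, so it works regardless of the individual sprite sizes.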

Wraparound for HTML5 Canvas: How to Get (Static) Shapes to Continue around Edges of Canvas

I'm trying to draw a tiled background using Javascript on an HTML5 canvas, but it's not working because shapes that intersect the edges of the canvas don't wrap around to the other side. (Just to be clear: these are static shapes--no motion in time is involved.) How can I get objects interrupted by one side of the canvas to wrap around to the other side?
Basically I'm looking for the "wraparound" effect that many video games use--most famously Asteroids; I just want that effect for a static purpose here. This page seems to be an example that shows it is possible. Note how an asteroid, say, on the right edge of the screen (whether moving or not) continues over to the left edge. Or for that matter, an object in the corner is split between all four corners. Again, no motion is necessarily involved.
Anyone have any clues how I might be able to draw, say, a square or a line that wraps around the edges? Is there perhaps some sort of option for canvas or Javascript? My google searches using obvious keywords have come up empty.
Edit
To give a little more context, I'm basing my work off the example here: Canvas as Background Image. (Also linked from here: Use <canvas> as a CSS background.) Repeating the image is no problem. The problem is getting the truncated parts of shapes to wrap around to the other side.
I'm not sure how you have the tiles set up; however, if they are all part of a single 'wrapper' slide which has its own x,y at, say, 0,0, then you could actually just draw it twice, or generate a new slide as needed. Hopefully this code will better illustrate the concept.
// Here, the 'tilegroup' is the same size as the canvas
function renderbg() {
  tiles.draw(tiles.posx, tiles.posy);
  if (tiles.posx < 0)
    tiles.draw(canvas.width + tiles.posx, tiles.posy);
  if (tiles.posx > 0)
    tiles.draw(-canvas.width + tiles.posx, tiles.posy);
}
So basically, the idea here is to draw the group of tiles twice: once in its actual position, and again to fill in the gap. You still need to work out when the entire group leaves the canvas completely and then reset it, but hopefully this points you in the right direction!
You could always create your tileable image in a canvas, generate a data URL with toDataURL(), and then assign that data URL as a background to some element and let CSS do the tiling... just a thought.
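A minimal sketch of that approach, assuming a made-up 64px tile; drawing the same circle at all four corners of the tile is what makes the shape "wrap" once CSS repeats it:

// sketch: draw one tileable square on an off-screen canvas, then let CSS repeat it
const tile = document.createElement('canvas');
tile.width = tile.height = 64;
const tctx = tile.getContext('2d');
tctx.fillStyle = '#eee';
tctx.fillRect(0, 0, 64, 64);
tctx.strokeStyle = '#c00';
// the four quarter-arcs meet when tiled and form whole circles at every grid intersection
for (const [cx, cy] of [[0, 0], [64, 0], [0, 64], [64, 64]]) {
  tctx.beginPath();
  tctx.arc(cx, cy, 20, 0, 2 * Math.PI);
  tctx.stroke();
}
document.body.style.backgroundImage = 'url(' + tile.toDataURL() + ')';
document.body.style.backgroundRepeat = 'repeat';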
Edit: If you're having trouble drawing a tileable image, you could create a canvas that is 3*width by 3*width, draw on it as usual (taking the center square of data as the final result), and then see if you can't draw from subsets of the canvas onto itself. It looks like you'd have to use:
var myImageData = context.getImageData(left, top, width, height);
context.putImageData(myImageData, dx, dy);
(with appropriate measurements)
https://developer.mozilla.org/En/HTML/Canvas/Pixel_manipulation_with_canvas/
Edit II: The idea was that you'd have a canvas with a center area of interest and buffer areas around it big enough to account for any of the shapes you may draw, like so:
XXX
XCX
XXX
You could draw the shapes once to this big canvas and then just blindly draw each of the X areas around the center onto the center area (and then clear those areas out for the next drawing). So, if K is the number of shapes, instead of 4*K draws you have K + 8 draws (and then 8 clears). Obviously the practical applicability of this depends on the number of shapes and on overlapping concerns, although I bet it could be tweaked. Depending on the complexity of your shapes, it may make sense to draw a shape 4 times as you originally thought, or to draw to some buffer area and then draw its pixel data 4 times. I'll admit this is an idea that just popped into my head, so I might be missing something.
Edit III: And really, you could be smart about it. If you know how a set of objects will overlap, you should only have to draw from the buffer once. Say you have a bunch of shapes in a row that only draw into the north overlapping region: all you should need to do is draw those shapes and then draw the north overlapping region onto the south side. The hairy regions are the corners, but I don't think they really get hairy unless the shapes are large... sigh... at this point I should probably quiet down and see whether there are any existing implementations of what I'm describing, because I'm not sure my off-the-cuff writing is helping anybody.
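For what it's worth, the simplest "draw the shape 4 times" variant mentioned in Edit II can be wrapped in a small helper. This is only a sketch with a made-up drawWrapped helper, not library code:

// sketch: draw a shape plus offset copies so it continues across the canvas edges
// ctx is assumed to be the 2d context of your background canvas
function drawWrapped(ctx, drawShape, x, y, w, h) {
  const cw = ctx.canvas.width, ch = ctx.canvas.height;
  for (const ox of [0, -cw, cw]) {
    for (const oy of [0, -ch, ch]) {
      // only draw the copies that could actually be visible
      if (x + ox < cw && x + ox + w > 0 && y + oy < ch && y + oy + h > 0) {
        drawShape(ctx, x + ox, y + oy);
      }
    }
  }
}

// usage: a 40px square hanging off the right edge reappears on the left
drawWrapped(ctx, (c, sx, sy) => c.fillRect(sx, sy, 40, 40), 780, 100, 40, 40);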

Can a canvas within a canvas easily be cleared from the main canvas?

I'm currently working on an interface where I have a primary canvas that is 800x800 in size. At the top I've generated a bunch of icons. When the user mouses over the icons, the mouse's x and y coordinates are matched against them to determine whether he is currently hovering over any of the icons. If he is, I want a hover effect where a label appears next to the mouse with the name of the icon, and the label follows the mouse as it moves. If he leaves the icon or moves to a different one, the last label is cleared, and either no label is displayed (if the user moved off all icons) or another label is displayed next to the mouse in the last one's place (if he hovers over another icon; the label's width varies with the length of the text).
The process of ordering and displaying these icons happens in a separate object from the rest of the canvas rendering, so I wouldn't want to re-render that entire object every time a mousemove event fires. I'm wondering if there's a way to draw to another "temporary" canvas context that can easily be cleared as the mouse moves, so no trails are left behind on the primary canvas. Can anyone point me to an example like this, or advise me on how I should go about accomplishing this sort of task?
Yes, you can certainly draw it onto a temporary (in-memory) canvas. This is done a lot for various reasons, and yours may be a valid one (especially if you don't have a background that changes). But it may not be the easiest to implement; it's hard to say without knowing more about your app.
There's a decent alternative you should consider: you could have two 800x800 canvases overlaid atop each other. This can be useful for some applications (like games) where there is a background, foreground, and middle-ground that all have different moving parts (but the background parts move rarely, the foreground isn't always present, etc.).
In the same way, you could "layer" your canvas app, with the icons being on one canvas, and the background and other parts of the app being on the other canvas.
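A rough sketch of that layered setup, done entirely in script; the id "main" and the hitTestIcons helper are placeholders for whatever your app already has:

// sketch: overlay a second canvas for labels so the main canvas is never disturbed
const main = document.getElementById('main');           // your existing 800x800 canvas
const overlay = document.createElement('canvas');
overlay.width = overlay.height = 800;
overlay.style.position = 'absolute';                    // positioned roughly over the main canvas; adjust for your layout
overlay.style.left = main.offsetLeft + 'px';
overlay.style.top = main.offsetTop + 'px';
overlay.style.pointerEvents = 'none';                   // let mouse events fall through to the main canvas
document.body.appendChild(overlay);
const octx = overlay.getContext('2d');

main.addEventListener('mousemove', (e) => {
  octx.clearRect(0, 0, overlay.width, overlay.height);  // cheap: only the label layer is cleared
  const label = hitTestIcons(e.offsetX, e.offsetY);     // hypothetical: returns an icon name or null
  if (label) {
    octx.font = '14px sans-serif';
    octx.fillText(label, e.offsetX + 12, e.offsetY - 8);
  }
});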
