I'm building a game in HTML5. I have hundreds of images that look similar to this:
And at certain points in the game I want to draw an outline around them. Like this:
I want different outline colors at different times, and for memory and bandwidth reasons I don't want to end up with [number of sprites] * [number of colors] additional images, so I'm looking at vector drawings.
What I need to solve comes down to two separate things:
Calculate a vector path for each frame in a spritesheet. This can be done either dynamically or ahead of time and stored.
Draw the vector path
The engine I'm using for the game is ImpactJS, which has no support for vector operations. The engine's author did his own vector drawing manually by exporting the vectors from Illustrator for a particular image, then using an online converter tool to change them to HTML5 canvas syntax. That isn't practical for hundreds of images, so I thought I'd see what approaches others here know of.
I would still like to draw the images using ImpactJS, since this game is already pretty far along, and just do a second-pass drawing of the outline when necessary.
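For reference, here's the sort of second-pass fallback I've been sketching in case vectors prove impractical: tint the sprite's silhouette on an offscreen canvas and stamp it at small offsets behind the sprite. Nothing here is ImpactJS-specific, and the function is hypothetical:

```javascript
// Sketch: outline a sprite by stamping its tinted silhouette at offsets.
// One sprite image serves every outline color; no extra images needed.
function drawOutlined(ctx, img, x, y, color, thickness = 1) {
  // 1. Build a silhouette of the sprite in the outline color.
  const off = document.createElement('canvas');
  off.width = img.width;
  off.height = img.height;
  const octx = off.getContext('2d');
  octx.drawImage(img, 0, 0);
  octx.globalCompositeOperation = 'source-in'; // keep only opaque pixels
  octx.fillStyle = color;
  octx.fillRect(0, 0, off.width, off.height);

  // 2. Stamp the silhouette around the sprite's position.
  for (let dx = -thickness; dx <= thickness; dx++) {
    for (let dy = -thickness; dy <= thickness; dy++) {
      if (dx || dy) ctx.drawImage(off, x + dx, y + dy);
    }
  }

  // 3. Draw the original sprite on top.
  ctx.drawImage(img, x, y);
}
```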
Thank you for any help!
Related
I'm working on a WebGL application that works similarly to video compositing programs like After Effects.
The application takes textures and stacks them like layers. This can be as simple as drawing one texture on top of another, or using common blend modes like screen to combine several layers.
Each layer can be individually scaled/rotated/translated via an API call, and altogether the application forms a basic compositing tool.
Now to my question: doing all this in a single WebGL canvas is a lot to keep track of.
// Start webgl rant
The application won't know how many layers there are ahead of time, so textures, coordinate planes, and shaders will need to be created dynamically.
Blending the layers together would require writing out the math for each blend mode (screen alone is result = 1 - (1 - src) * (1 - dst) per channel). Transforming vertex coordinates requires matrix math, and doing anything in WebGL generally requires lots of code, since it's a low-level API.
// End webgl rant
However, this could easily be solved by making a new canvas element for each layer. Each WebGL canvas would have a texture drawn onto it, and then I could scale/move/blend the layers using simple CSS.
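Roughly what I have in mind (a sketch; the container element, sizes, and blend mode are made up):

```javascript
// Sketch: one WebGL canvas per layer, stacked and blended with CSS.
function addLayer(container, width, height, blendMode) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  canvas.style.position = 'absolute';
  canvas.style.left = '0';
  canvas.style.top = '0';
  canvas.style.mixBlendMode = blendMode; // e.g. 'screen', 'multiply'
  container.appendChild(canvas);
  return canvas;
}

const stage = document.getElementById('stage'); // position: relative wrapper
const layer = addLayer(stage, 1280, 720, 'screen');
// Transforms come free from CSS instead of hand-written matrix math:
layer.style.transform = 'translate(40px, 20px) rotate(10deg) scale(1.2)';
// Each layer still gets its own context for drawing its texture:
const gl = layer.getContext('webgl');
```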
On the surface it seems like the performance hit won't be that bad, because even if I did combine everything into a single context, each layer would still need its own texture and coordinate system. The number of textures and coordinates stays the same, just spread across multiple contexts.
However, deep down I know this is somehow horribly wrong, and my computer's going to catch fire if I even try. I just can't figure out why.
With a goal of supporting ~4 layers at a time, would using multiple canvases be a valid option? Besides worrying about browsers having a maximum number of active WebGL contexts, are there any other limitations to be aware of?
So I found out that texturing planets can be really hard. I created a 4096px image and wrapped it around a high-poly sphere. Apart from the possible memory-management performance issues that come with a 3-4 MB image, the texture looks bad/pixelated in a close-up (orbital) view.
I was thinking I could increase the resolution significantly by splitting up the picture, then creating a low, medium, and high version of each section. If the camera viewport is very close to a particular section, render the high-resolution image; if far away, remove it from memory and apply the low or medium version.
To be honest, I am not sure what strategy to use to render high-quality planets. Should I maybe avoid textures and just use height maps and color the planet with JavaScript? Same thing for the clouds: should I create a sphere with an alpha map, or should I use shaders?
As you can see, this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL/three.js has improved significantly over time, but since this all runs inside the browser, I assume choosing the right approach is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it usually means switching from high-polygon to low-polygon models, but in general it means doing anything to switch high detail to low detail.
Because you can't make a 1000000x100000 texture, which is pretty much what you'd need to get the results you want, you'll need to build the sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail: a single sphere when you can see the whole planet, a sphere made of say 100 pieces when slightly closer, 1000 pieces when closer still, 10000 pieces when closer than that, etc. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (looking at millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
Another thing people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere all the way zoomed out and transitions to the 100-piece or 1000-piece sphere, it crossfades between the two.
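As a concrete starting point, three.js ships a THREE.LOD helper that swaps meshes based on camera distance. A minimal sketch (the radius, segment counts, and distances are arbitrary, and an existing scene is assumed):

```javascript
// Sketch of three.js's built-in LOD object: register one mesh per
// detail level; the renderer picks one by distance to the camera.
const lod = new THREE.LOD();
const material = new THREE.MeshStandardMaterial({ color: 0x3366ff });

const levels = [
  { segments: 64, distance: 0 },   // high detail, used when close
  { segments: 16, distance: 50 },  // medium detail
  { segments: 4,  distance: 200 }, // low detail, used when far away
];

for (const { segments, distance } of levels) {
  const mesh = new THREE.Mesh(
    new THREE.SphereGeometry(10, segments, segments),
    material
  );
  lod.addLevel(mesh, distance);
}

scene.add(lod);
// Recent three.js versions call lod.update(camera) automatically
// during rendering, so the swap happens with no extra code.
```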
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
You could create a sphere with different topology.
Say you create 6 square planes, arranged in such a way that they form a box. You can tessellate these planes to give the sphere enough resolution. The planes would have UV mapping similar to cube-mapping: each holds one cubemap face.
Then you loop through all the vertices, take the position vector and normalize it. This will yield a sphere.
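A minimal sketch of that normalize step with current three.js BufferGeometry (the subdivision count and radius are arbitrary):

```javascript
// Sketch: tessellated box -> sphere, by pushing every vertex out to
// the sphere's surface along its direction from the center.
const radius = 10;
const geometry = new THREE.BoxGeometry(1, 1, 1, 32, 32, 32);
const position = geometry.attributes.position;
const v = new THREE.Vector3();

for (let i = 0; i < position.count; i++) {
  v.fromBufferAttribute(position, i).normalize().multiplyScalar(radius);
  position.setXYZ(i, v.x, v.y, v.z);
}
geometry.computeVertexNormals(); // recompute smooth normals for lighting
```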
You can convert an equirectangular panorama image into a cubemap, which I think will let you get more resolution and less stretching for cheap.
For starters, the 4096 x 4096 should really be 4096 x 2048 on the default sphere, since equirectangular maps are 2:1, but the remapped sphere can hold 6 x 4096 x 4096 with no stretching and can be drawn in 6 draw calls.
Further splitting these planes could give a good basis for the LOD approach gman suggests.
I am currently working on a JavaScript map generator application, and I have written more than 400 lines of code that create a hexagonal map, add coordinates to tiles, add textures to tiles (grass, ocean), and place elements such as castles, units, etc.
I have added quite a few useful functions to this offline map editor, like zoom in and zoom out, turning the grid on/off, dragging the map, and a few others, and I'm currently studying how to add save and load functionality.
It looks sort of like a paint application, except that instead of drawing pixels, you use it to draw a map with hex tiles. You simply click "Generate a new map", give your desired map size (e.g. 64 tiles wide by 64 tiles high), and the map is drawn for you. The tiles are simple divs with the relevant background image as a texture, drawn one by one in a simple for loop. But as the code grows in size, so do my worries.
Because the map I create in my own map editor will be used in an online multiplayer game, it will be huge! For example, to support at least 20,000 users in the upcoming game, there should be at least 20,000 tiles just for the users to occupy, not to mention the territory they will own: mountains, jungles, barbarian tribes, and so on.
I have done the calculations and found that a 512 by 512 map (about 262,000 tiles) will sufficiently answer the needs of that many users. However, such a map is huge, so I decided to test how long it takes to draw it with my code using the fewest operations possible, and found that it takes nearly a minute or more, which is not acceptable from a gamer's perspective.
A zoom, for example, on such a huge map means looping through all 262,000 tiles to change their size. Although that takes less time than drawing/loading the map from scratch, it is still slow.
I was thinking: with a map that huge, which won't even fit in a browser window, why should I draw the entire map? Why not instead load only the part the user is currently looking at, reducing load time and increasing performance? But this is proving to be a real challenge, and there are very limited resources online about implementing such functionality. Where do I start? How should I approach the problem and its solution?
I would start by separating your concerns a little more: you're able to view W x H pixels, and the top left of the user's screen sits at (x, y) in world coordinates.
Loading the entire map, as you have pointed out, is crazy. But by knowing how large the game world is and where the user is within it, you can easily select the subset of tiles that are in view, as sketched below.
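A minimal sketch of that selection, assuming square tiles and pixel coordinates (all names are illustrative):

```javascript
// Sketch: compute the range of tiles visible in the viewport, given the
// camera's top-left corner (x, y) in world pixels.
const TILE_SIZE = 64; // pixels per tile at the current zoom level

function visibleTileRange(x, y, viewWidth, viewHeight, mapCols, mapRows) {
  const firstCol = Math.max(0, Math.floor(x / TILE_SIZE));
  const firstRow = Math.max(0, Math.floor(y / TILE_SIZE));
  const lastCol = Math.min(mapCols - 1, Math.ceil((x + viewWidth) / TILE_SIZE));
  const lastRow = Math.min(mapRows - 1, Math.ceil((y + viewHeight) / TILE_SIZE));
  return { firstCol, firstRow, lastCol, lastRow };
}

// Draw only the visible tiles instead of all 262,000:
// const r = visibleTileRange(camX, camY, view.width, view.height, 512, 512);
// for (let row = r.firstRow; row <= r.lastRow; row++)
//   for (let col = r.firstCol; col <= r.lastCol; col++) drawTile(col, row);
```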
Keep in mind that at a zoomed-out resolution you shouldn't be using the full-sized images. Loading 262,000 images (for just the map!) is going to be too heavy and will probably crash the browser. You should have different images for different zoom levels, as sketched below. This is a much bigger question, and you should buy a book and do more research on Google. But at least thinking about "where the user is" vs. "where the items in the world are" is the place I would start at.
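For instance, something along these lines (the directory layout is hypothetical):

```javascript
// Sketch: serve pre-scaled tile images per zoom level, so zooming out
// swaps image sources instead of resizing thousands of full-size divs.
const ZOOM_LEVELS = ['low', 'medium', 'high'];

function tileUrl(zoom, col, row) {
  return `tiles/${ZOOM_LEVELS[zoom]}/${col}_${row}.png`;
}

// When the zoom level changes, update only the visible tiles:
// img.src = tileUrl(currentZoom, col, row);
```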
Hope that helps.
First off, I'm not used to dealing with images, so please excuse me if my wording is off.
I am looking to take an image dropped onto an HTML5 canvas, sample it, reduce the sampling, then create a polygonal representation of the image, using mostly triangles with a few other polygons, and draw that representation to the canvas.
But I do not know where to start with an algorithm to do so. What sort of pseudocode do I need for this kind of algorithm?
This image may offer a better understanding of the end result:
I would do the following:
Create a field of randomly-placed dots.
Create a Voronoi diagram from the dots.
Here's a good JavaScript library that I've used in the past for this: https://github.com/gorhill/Javascript-Voronoi
Color each cell based on sampling the image's colors.
Do you just pick the color at the dot? Sample all the colors within the cell and average them? Weight the average toward the center of the cell? Each would produce different, but possibly interesting, results. (See the sketch after this list.)
If the result needs to be triangles rather than polygons, create a Delaunay triangulation instead of a Voronoi diagram. GitHub has 15 JavaScript libraries for this, but I haven't tried any of them, so I can't specifically recommend one.
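A rough sketch of the Voronoi route with the gorhill library linked above, using the simplest coloring choice (the site count is arbitrary; `canvas`/`ctx` are assumed to hold the dropped image):

```javascript
// Sketch: random sites -> Voronoi cells -> fill each cell with the
// color sampled at its site. Assumes Javascript-Voronoi is loaded.
const voronoi = new Voronoi();
const bbox = { xl: 0, xr: canvas.width, yt: 0, yb: canvas.height };

const sites = [];
for (let i = 0; i < 500; i++) {
  sites.push({
    x: Math.random() * canvas.width,
    y: Math.random() * canvas.height,
  });
}

const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
const diagram = voronoi.compute(sites, bbox);

for (const cell of diagram.cells) {
  // Simplest coloring: the pixel directly under the cell's site.
  const i = (Math.floor(cell.site.y) * canvas.width + Math.floor(cell.site.x)) * 4;
  ctx.fillStyle = `rgb(${pixels[i]}, ${pixels[i + 1]}, ${pixels[i + 2]})`;

  // Trace the cell polygon from its half-edges and fill it.
  ctx.beginPath();
  cell.halfedges.forEach((he, j) => {
    const p = he.getStartpoint();
    j === 0 ? ctx.moveTo(p.x, p.y) : ctx.lineTo(p.x, p.y);
  });
  ctx.closePath();
  ctx.fill();
}
```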
OK, it's a bit indirect, but here goes!
This is a plugin for SVG that turns images into pointilized art: https://github.com/gsmith85/SeuratJS/blob/master/seurat.js
Here's the interesting part. Under the hood, it uses canvas to do the processing!
The examples show images composed of "dots" and "squares".
Perhaps you can modify the code to produce your triangles; even just cutting the squares diagonally would create triangles.
I'm working on displaying an interactive map in HTML5.
I have created the zones of the map as arrays of points (representing coordinates),
like:
zone1 = [{x: 3, y: 4}, {x: 8, y: 5}]
and I have also created a map which is an array of zones
like:
map = [zone1, zone2, ...]
I have no problem drawing the zones on the canvas using the context.lineTo() function; in the same way, I'm able to capture the mouse position on click and determine which zone a user clicked using a point-in-polygon algorithm.
My difficulty arises when I want to fill a zone with color when it is clicked.
Anybody have ideas?
PS:
The shapes I made are irregular
I'm not into using JavaScript libraries like jQuery or anything else
HTML5 Canvas has no notion of shapes or objects that you can manipulate after drawing. You have a few options in your situation:
Use SVG to draw what you need (check the examples on W3Schools); a small sketch follows below
Use a JS canvas library that adds an abstraction providing a notion of shapes (check out EaselJS)
Write your own abstraction over canvas to provide shapes
You should know, however, that even with such libraries, "shapes" get fully redrawn, possibly along with the entire scene. SVG alleviates this, but its performance decreases as the number of shapes/objects grows.
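A tiny sketch of the SVG option (coordinates are examples): each zone stays a live DOM node, so filling it on click is a single attribute write.

```javascript
// Sketch: a zone as an SVG polygon that recolors itself when clicked.
// Assumes an <svg> element already exists in the page.
const svg = document.querySelector('svg');
const SVG_NS = 'http://www.w3.org/2000/svg';

const zone = document.createElementNS(SVG_NS, 'polygon');
zone.setAttribute('points', '3,4 8,5 6,9'); // example irregular shape
zone.setAttribute('fill', '#cccccc');
zone.addEventListener('click', () => zone.setAttribute('fill', 'red'));
svg.appendChild(zone);
```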
You can't, directly: the shapes you created are not variables or referenceable in any way once they've been drawn to the canvas. You could redraw the shape with a new colour over the old one (a sketch follows below), but I think your best bet would be to use a library to handle this for you.
Since I have used it myself, my own suggestion would be Kinetic.js, but there are plenty to choose from.
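If you do go the manual-redraw route, a minimal sketch (assuming zones are arrays of {x, y} points and a 2D context named context, as in the question):

```javascript
// Sketch: repaint a clicked zone by retracing its polygon with a fill.
function fillZone(context, zone, color) {
  context.beginPath();
  context.moveTo(zone[0].x, zone[0].y);
  for (let i = 1; i < zone.length; i++) {
    context.lineTo(zone[i].x, zone[i].y);
  }
  context.closePath();
  context.fillStyle = color;
  context.fill();   // fill the polygon's interior
  context.stroke(); // redraw its outline on top
}

// After the point-in-polygon test identifies the clicked zone:
// fillZone(context, map[hitIndex], 'rgba(255, 0, 0, 0.5)');
```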