I'm trying to make a game that has three scenes, each with different functions, but they all need the same three.js objects in them.
My question is: in terms of rendering speed, is it better to use three cameras and just reposition the objects when I change from one scene to another, or to use three scenes that each hold their own copies of the same kinds of objects, so nothing ever has to be moved?
If you don't understand that, imagine this: I have a scene full of three.js objects shaped like letters, and they spell a paragraph. I want a new paragraph with the same letters, but is it better for the rendering speed to move all the letters around, or flip to a scene that has already been loaded with the same letters, but in the shape of the new paragraph?
I am totally open to alternative ways of accomplishing this task, as long as they are only with JavaScript.
Thank you very much!
I believe you're focusing on the wrong problem. Two scenes with 50 objects each will render in (relatively) the same amount of time on the GPU. The "rendering speed" you're worried about includes the scene processing needed to reposition your letters. So the real problem you're facing is minimizing the number of operations needed to get your letters from one arrangement to the next.
If we make a few assumptions:
You don't care about memory
You don't care about the time it takes to initially set up the scenes
You don't plan on rearranging the letters further than the given scenes
Then the fastest approach is to keep separate copies of all the objects in pre-built scenes. You wouldn't need to perform any repositioning, so you would jump directly to the render step, as in the sketch below.
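A minimal sketch of that idea, assuming hypothetical buildParagraph1/2/3() helpers that construct each pre-arranged scene, plus an existing renderer and camera (sharing geometries and materials between the scenes keeps the memory cost down):

// Build all three scenes once, up front.
const paragraphScenes = [buildParagraph1(), buildParagraph2(), buildParagraph3()];
let activeScene = paragraphScenes[0];

function showParagraph(i) {
    activeScene = paragraphScenes[i]; // no repositioning, just a pointer swap
}

function animate() {
    requestAnimationFrame(animate);
    renderer.render(activeScene, camera); // render whichever scene is active
}
animate();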
I want to detect when the user in my ThreeJS scene is looking at one of the animation models. Currently I am using this code:
ourRaycaster.setFromCamera(g_Mouse, g_ThreeJsCamera);
g_AryObjectsLookedAt = ourRaycaster.intersectObjects(g_ThreeJsScene.children, true);
The problem is that although the raycaster will detect any collisions between the current camera line of sight and a child object in the animation models group, it will not intersect with any empty spaces in the group. Although this makes sense from a technical point of view, it doesn't match what we do as humans when looking at an object. When we look at an object, we mentally draw a shape around the overall object and if we are looking inside that pseudo-shape we feel that we are looking at the object.
For example, if you have a big head and wide shoulders, but a very thin neck, if I focus momentarily at the space alongside your neck I still feel that I am looking at "you".
What I need is a raycaster that does approximately the same thing when it passes through an animation model's group of children.
Here is a concrete example. I am using the sample robot animation model from the ThreeJS examples. It has a really big head. When I do the raycast from a distance, no part of the robot shows up in the intersecting-objects list.
But if I get right up "in its face", it is detected (its "face" is detected).
This is technically accurate, but I need something a lot more forgiving so I can create proper logic for interactions between the animation models and the user. How can I do this without having to iterate manually over every animation model in the scene?
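One hedged way to get that forgiving behavior is to intersect the ray with a bounding box around each whole model instead of its exact meshes (g_AryAnimationModels is an assumed array holding the loaded model groups):

const modelsLookedAt = [];
for (const model of g_AryAnimationModels) {
    // Whole-group bounds: the "pseudo-shape" drawn around the model.
    const box = new THREE.Box3().setFromObject(model);
    if (ourRaycaster.ray.intersectsBox(box)) {
        modelsLookedAt.push(model); // the gaze falls inside the pseudo-shape
    }
}

Recomputing the box every frame is wasteful, and for skinned meshes the box may lag the animated pose, so caching one slightly padded box per model would likely be cheaper and forgiving enough.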
I have a library that renders models of specific formats.
These formats have animations, and constantly changing things like dynamic mesh colors (aka vertex color per mesh), meshes dynamically being visible or not, and many other things, all depending on the animations.
All was fine, and then I wanted to make the rendering actually fast and switched to instanced rendering.
Since then everything became a nightmare, and no matter how many designs I try, I simply cannot get it to work properly.
The code works in such a way that every mesh in every model owns a bucket object, which holds shared buffers for many instances of this mesh.
For example, every instance needs the mentioned dynamic vertex color, so the bucket holds one buffer with enough memory for some N vertex colors.
When a new instance is added, a free index into this buffer is given to it.
Every instance writes to this index as if it's a C pointer, and at the end the whole buffer is updated with the new values.
So far so good.
When an instance is removed, there's a hole in the buffer.
What I did was to "move" the last instance to the one that got deleted, essentially the same as changing pointers.
This isn't so nice, but it's fast.
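A minimal sketch of that swap-remove, assuming a bucket holding a colors Float32Array (4 floats per instance) and a parallel instances array (all names here are assumptions):

function removeInstance(bucket, index) {
    const last = bucket.count - 1;
    if (index !== last) {
        // Copy the last instance's slot over the removed one...
        bucket.colors.copyWithin(index * 4, last * 4, last * 4 + 4);
        // ...and repoint the moved instance at its new slot.
        bucket.instances[index] = bucket.instances[last];
        bucket.instances[index].bufferIndex = index;
    }
    bucket.count--;
}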
Then the issues came in two forms: textures and cameras.
Say I want one instance to use a different texture; this is impossible with instanced rendering directly.
So now it's split: a mesh holds an array of mesh "views", and each of these views holds the buckets.
A view can define its own texture overrides, and so every view costs an additional N render calls.
This is so-so, but I couldn't think of anything else.
And finally, the actual problem I am now totally stuck on - cameras.
I want to be able to define the camera each instance uses.
It looked as if I could just add a matrix as an attribute, but that doesn't allow changing the viewport, which is kind of important.
I cannot for the life of me think of any way that actually works, that allows all of the above. Switching cameras, switching textures, and instanced rendering.
To make matters worse, I also can't think of any nice way to cull instances.
The only real way I can see to really support everything, is to dynamically keep re-creating buffers for each render call, and copying tons of data each frame, both on the CPU side and GPU side.
For example - sort all of the instances per-camera into different JS arrays (culling can be done here), create a typed array for each one, copy all of the instance data from all of the instances into this array, bind it to a buffer, and redo this for every group, for every "view", for every frame.
This is basically OOP rendering, switched to instanced rendering right at the final step, and I doubt it will be any kind of fast...
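To make that concrete, a rough sketch of the per-camera rebuild (FLOATS_PER_INSTANCE, writeInto, the culled flag, and instanceBuffer are assumptions):

const perCamera = new Map();
for (const inst of instances) {
    if (inst.culled) continue; // culling hook lives here
    if (!perCamera.has(inst.camera)) perCamera.set(inst.camera, []);
    perCamera.get(inst.camera).push(inst);
}
for (const [camera, list] of perCamera) {
    // Pack every surviving instance into one typed array...
    const data = new Float32Array(list.length * FLOATS_PER_INSTANCE);
    list.forEach((inst, i) => inst.writeInto(data, i * FLOATS_PER_INSTANCE));
    // ...and re-upload it: every group, every "view", every frame.
    gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.DYNAMIC_DRAW);
    gl.viewport(camera.x, camera.y, camera.width, camera.height);
    // ...set camera uniforms, then one instanced draw per mesh "view"...
}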
tl;dr: instanced rendering is hell to actually use. Are there any known, proper techniques for using it with different textures/cameras/whatnot?
Say you have a simple raycaster: when you mouse over a model, it lights up.
However, in this implementation the model would be broken into parts, each a piece of the model but still its own separate "model". For example, say your model happened to be a car. When you mouse over the hood, the hood lights up. When you mouse over the door, the door lights up, etc.
I did not find an example of this among the three.js examples.
Is there a way to break a full .obj model into several individual but connected models in three.js?
I have been working with three.js for quite some time, but I do not think this is possible, at least not with three.js itself (maybe WebGL has some tools that would help you achieve this, but for a complex model such as a car the result would still be pretty terrible). There are, however, several workarounds.
In three.js, create your model from multiple smaller ones. For simple objects this is possible (for example: instead of a sphere, create two hemispheres and place them next to each other).
Instead of using a raycaster, use a point light. This casts light on an area rather than an object, so if you target one big object you will end up lighting only part of it, depending on the intensity and distance of the point light.
If you have a complex model such as a car, load it into a 3D modelling program (Blender, ...), break it into smaller parts, and save each of them separately. Then, in your three.js code, load each one separately and position them so they look like a single object. (I guess this is the only reasonable way in this case; a sketch follows.)
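A sketch of that last approach, assuming the car was exported from Blender as separate .obj files (the file names are made up) and the OBJLoader from the three.js examples is available:

const loader = new THREE.OBJLoader();
const car = new THREE.Group();
['hood.obj', 'door_left.obj', 'door_right.obj'].forEach(function (file) {
    loader.load(file, function (part) {
        car.add(part); // each part remains an individual raycast target
    });
});
scene.add(car); // position the parts so they read as one car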
I'm trying to make a little platform game with pure HTML5 and JavaScript. No frameworks.
So in order to make my character jump on top of enemies and floors/walls etc., it needs some proper collision detection algorithms.
Since I don't usually do this sort of thing, I really have no clue how to approach the problem.
Should I re-check all obstacles on the canvas every frame (it runs at 30 FPS) and see whether any of them collides with my player, or is there a better and faster way to do so?
I even thought of making dynamic maps, so the width, height, and x/y coordinates of each obstacle are stored in an object. Would that make it faster to check whether it's colliding with the player?
1. Should I re-check every frame (it runs at 30 FPS)?
Who says it runs at 30 FPS? I found no such thing in the HTML5 specification. The closest you'll get to having any say over the framerate at all is to call setInterval or the newer, preferred requestAnimationFrame function.
However, back to the story. You should check for collisions as often as you can. When writing games on platforms where one has a greater ability to measure CPU load, this is one of those things you might scale back if the CPU has a hard time keeping up. In JavaScript, though, you're out of luck trying to implement an advanced solution like that.
I don't think there's a shortcut here. The computer has no way of knowing what collided, how, when, and where if you don't make that computation yourself. And yes, this is usually, if not always, done just before each new frame is painted, as in the sketch below.
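A minimal sketch of that per-frame pattern (updatePositions, checkCollisions, and draw are hypothetical stand-ins for your own game code):

function frame() {
    updatePositions();  // move the player, enemies, projectiles...
    checkCollisions();  // ...then test everything that could collide
    draw();             // ...and repaint
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);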
2. A dynamic map?
If by "map" you mean an array-like object or multidimensional array that maps coordinates to objects, then the short answer has to be no. But please do have an array of all objects on the scene. The width, height and coordinates of the object should be stored in variables in the object. Leaking these things would quickly become a burden; rendering the code complex and introduce bugs (please see separation of concerns and cohesion).
Do note that I just said "array of all objects on the scene" =) There is a subtle but most important point in this quote:
Whenever you walk through the objects to determine their positions and whether they have collided with something, also check your viewport boundaries and determine whether each object is still "on the scene" at all. For instance, if you have a spacecraft simulator of some kind, and a star has just passed the player's viewport from one side to the other and then off the screen, with no way of returning and becoming visible again, then there is no reason to keep that star in the system any more. It should be deleted and removed, and definitely not left in an array to take part in future collision detection against the player's avatar! Such things can dramatically slow down your game. A sketch of this pruning follows.
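As a sketch, assuming a world that only ever scrolls left, so that nothing past the viewport's left edge can return:

objects = objects.filter(function (o) {
    // Keep only objects whose right edge is still inside or ahead of the viewport.
    return o.x + o.width > 0;
});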
Bonus: Collision quick tips
Divide the screen into parts. There is no reason to look for a collision between two objects if one of them is on the left side of the screen and the other is on the right. You can split the screen into more logical units than just left and right, too.
Always strive to run a cheap computation first. We kind of already did that in the last tip. But even once you know two objects just might be colliding, draw two logical squares around them first. For instance, if you have two 2D airplanes, there is no reason to start by checking whether some part of their wings collide. Draw a square around each airplane, capturing its largest width and largest height. If these two squares do not overlap then, just like in the last tip, you know the planes cannot be colliding. But if this first-phase cheap computation hints that they might be, pass the two airplanes on to a more expensive computation that really looks into the matter.
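A minimal sketch of that two-phase test, assuming each object carries x, y, width, and height:

function boxesOverlap(a, b) {
    return a.x < b.x + b.width && b.x < a.x + a.width &&
           a.y < b.y + b.height && b.y < a.y + a.height;
}

// Cheap phase first; only run the expensive test when the boxes overlap.
if (boxesOverlap(plane1, plane2)) {
    checkWingCollision(plane1, plane2); // hypothetical fine-grained test
}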
I am still working on something similar: I wanted to make lots of divs and have them act under physics. I will share some things that weren't obvious to me at first.
Detect collisions in data first. I was reading the x and y of boxes on screen, then checking them against other divs. After a week it occurred to me how wasteful this was: first I would assign a new value to a div, then read it back from the div. Accessing the DOM is expensive; think of the DOM as a rendering stage only (see the sketch after these tips).
Use web workers where possible.
Use canvas if possible.
And, if possible, make elements carry a list of the elements they should be checked against for collision (this helps only in certain cases).
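Here is a sketch of the "collisions in data first" tip: simulate on plain objects, then write to the DOM once per frame (the property names are assumptions):

const boxes = [{ el: document.getElementById('box1'), x: 0, y: 0, vx: 2, vy: 1 }];

function step() {
    for (const b of boxes) { b.x += b.vx; b.y += b.vy; } // physics on data only
    // ...collision checks go here, reading b.x/b.y, never the DOM...
    for (const b of boxes) {
        b.el.style.transform = 'translate(' + b.x + 'px,' + b.y + 'px)'; // one write per frame
    }
    requestAnimationFrame(step);
}
requestAnimationFrame(step);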
I learned that interactive collisions are far more expensive, because you have to keep checking for changes in the environment, whereas in a non-interactive animation you can simulate ahead of time what is going to happen; the animation ends up more fluid and more CPU stays available.
I made something at a very, very early stage just for fun: http://www.lastnoob.com/
I am creating a platform game in JavaScript using canvas which is entirely tile-based. What is the best method of storing the blocks of items in the game (walls, floors, items)? The thing is every tile can be destroyed or created.
Currently I have a 2D array, so I can quickly check whether an item is at a specific X & Y position. The problem is that when the user moves and the map needs to scroll, I have to reassign every block. And what happens when an item is at x = 0? I can't use negative indexes.
I would rather have smooth, analogue scrolling as opposed to a tile at a time. Also, I plan on randomly generating maps as the user moves, wherever one hasn't previously been generated. Once something is generated, it should stay that way forever.
Another point I should mention is that it will also be multiplayer. So chunking the screen is a great idea until the cached data becomes dirty and has to be refreshed from the database. Gah, I'm so new to all this; it seems impossible. Any help is greatly appreciated.
Since you have infinite levels I'd suggest a structure similar to what you already have, with a slight tweak.
Instead of storing everything in one big array and moving stuff around inside that array every time the user moves (ouch), store 9 chunks of map, each roughly twice the size of the screen. When the user approaches the edge of the current chunk, dispose of the chunks that are offscreen, shift all the chunks over, and load new ones into the gap.
Hopefully that was clear; just in case, picture a 3×3 grid of lettered chunks with the viewport sitting inside the middle one.
The lettered squares are the chunks of map, and the viewport is smaller than any single chunk. As the viewport moves right, you unload the leftmost column of chunks (say A, B and C), shift all the others left, and load new data into the rightmost column. Since a chunk is twice the width of the screen, you have the time it takes the user to cross the screen to generate/load the level into those chunks. If the user moves around the world quickly, you can use a 4×4 set of chunks for extra buffering.
To address returning to previous chunks of map, there are a couple of ways to do it:
Write the chunks out to disk when they're no longer in use (in JavaScript, localStorage or IndexedDB would be the closest equivalent).
Expand your chunk set infinitely in memory. Rather than an array of chunks, have an associative array which takes the x/y position of a chunk and returns that chunk (or null, meaning the user has never been there before and you need to generate it); see the sketch after this list.
Procedurally generate your levels. That's complex, but it means that once a chunk goes off screen you can just dispose of it, confident that you can regenerate exactly the same chunk again later.
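A sketch of the associative-array option, keyed by chunk coordinates (generateChunk is a hypothetical generator; it must be deterministic if you combine this with the third option):

const chunks = {};
function getChunk(cx, cy) {
    const key = cx + ',' + cy;
    if (!(key in chunks)) {
        chunks[key] = generateChunk(cx, cy); // first visit: generate and keep forever
    }
    return chunks[key];
}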
There are obviously lots of ways to do this.
If the levels aren't too large, you can keep your original design of a 2D array and use a variable to store the current x/y scroll position. This means you store all of the map information in memory at all times and only access the parts you need to display on the screen.
When painting, you work out which tile is visible at the current scroll position x/y and how many tiles fit on the screen at the current screen width, and you only paint the ones you need.
If you scroll whole tiles at a time, then it's pretty easy. If it's more analogue scrolling, you will need finer-grained control over where in the tile you start drawing, or you can cheat: draw the whole set of tiles to an in-memory bitmap and then blit that to the drawing canvas with a negative offset.
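A sketch of the finer-grained variant, assuming a TILE pixel size, a map 2D array, and a drawTile helper:

const firstCol = Math.floor(scrollX / TILE);
const firstRow = Math.floor(scrollY / TILE);
const offsetX = -(scrollX % TILE); // sub-tile offset gives analogue scrolling
const offsetY = -(scrollY % TILE);
const cols = Math.ceil(canvas.width / TILE) + 1;  // +1 covers partially visible tiles
const rows = Math.ceil(canvas.height / TILE) + 1;

for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
        const row = map[firstRow + r];
        const tile = row && row[firstCol + c];
        if (tile) drawTile(tile, offsetX + c * TILE, offsetY + r * TILE);
    }
}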
I once defined this very thing in XML and JSON... I think the JSON serialization would be a lot faster and more efficient (not to mention easier) to work with in JavaScript, especially since JSON lends itself so well to the variable-length lists you would require for "N" levels in each game.
Using a standard format would also make it more reusable and (ideally) encourage more collaboration, if that's what you're looking for. You can check out my first attempt at creating a Video Game schema for level structure.
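As a purely illustrative sketch (every field name here is made up), such a level list might look like:

const game = {
    levels: [
        { name: 'Level 1', tiles: [[1, 1, 0], [0, 1, 0]], items: ['coin', 'key'] },
        { name: 'Level 2', tiles: [[1, 0, 1], [1, 1, 1]], items: [] }
        // ...the array grows freely, one entry per level
    ]
};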
As fileoffset says, there are many ways to do it, but I highly suggest keeping your level data (i.e. concepts) separate from your rendering (i.e. objects in motion, paths, speed, timing, x/y/z coordinate positioning, etc.).
Again, as the article said, that area is the most quickly changing, and there's no telling whether WebGL, SMIL+SVG, Canvas+JavaScript, good ol' Flash/ActionScript, or something else will be the best route for you, depending on your needs and the type of game you are developing.