I am currently working on a small project using the new Babylon.js framework. One of the issues I have run into is that I basically have two meshes. One mesh is supposed to be the background, and the other is supposed to follow the cursor to mark where on the background mesh you are targeting. The problem is that when I move the targeting mesh to the position of the cursor, it blocks the background mesh when I use scene.pick, resulting in the targeting mesh having its position set onto itself.
Is there any way to ignore the targeting mesh when using scene.pick so that I only pick the background mesh, or is there some other method I could use? If not, what would be the steps to implement this sort of feature, essentially raycasting only through certain meshes?
If you need code samples or any other forms of description, let me know. Thanks!
Ok, it's easy.
So, we have two meshes. One is called "ground", the second "cursor". If you want to pick only on the ground, you have two solutions:
First:
var ground = new BABYLON.Mesh("ground", scene);
ground.isPickable = true;
var cursor = new BABYLON.Mesh("cursor", scene);
cursor.isPickable = false;
...
var p = scene.pick(event.clientX, event.clientY); // it only considers meshes with isPickable === true
...
Second:
var ground = new BABYLON.Mesh("ground", scene);
var cursor = new BABYLON.Mesh("cursor", scene);
...
var p = scene.pick(event.clientX, event.clientY, function (mesh) {
    return mesh.name == "ground"; // so only the ground will be pickable
});
...
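Either way, you can then use the pick result to place the cursor mesh. A minimal sketch, assuming the predicate version above:
var pickInfo = scene.pick(event.clientX, event.clientY, function (mesh) {
    return mesh.name == "ground";
});
if (pickInfo.hit) {
    // move the cursor mesh to the picked point on the ground
    cursor.position = pickInfo.pickedPoint.clone();
}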
regards.
I would like to build a parallax effect from a 2D image using a depth map, similar to this or this, but using three.js.
The question is, where should I start? Using just a PlaneGeometry with a MeshStandardMaterial renders my 2D image without any parallax occlusion. Once I add my depth map as the displacementMap property I can see some sort of displacement, but it is very low-res. (Maybe that's because displacement maps are not meant to be used for this?)
My first attempt
import * as THREE from "three";
import image from "./Resources/Images/image.jpg";
import depth from "./Resources/Images/depth.jpg";
[...]
const geometry = new THREE.PlaneGeometry(200, 200, 10, 10); // displacement moves vertices, so only 10x10 segments gives very coarse results
const material = new THREE.MeshStandardMaterial();
const spriteMap = new THREE.TextureLoader().load(image);
const depthMap = new THREE.TextureLoader().load(depth);
material.map = spriteMap;
material.displacementMap = depthMap;
material.displacementScale = 20;
const plane = new THREE.Mesh(geometry, material);
Or should I use a Sprite object, whose face always points to the camera? But how would I apply the depth map to it then?
I've set up a codesandbox with what I've got so far. It also contains an event listener for mouse movement and rotates the camera on movement, as it is a work in progress.
Update 1
So I figured out that I seem to need a custom ShaderMaterial for this. After looking at PixiJS's implementation, I found out that it is based on a custom shader.
Since I have access to the source, all I need to do is rewrite it to be compatible with three.js. But the big question is: HOW?
It would be awesome if someone could point me in the right direction, thanks!
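For reference, a rough sketch of the kind of custom ShaderMaterial this could be rewritten into (the uniform names uMouse and uStrength are my own, not taken from the PixiJS source): sample the depth map and shift the image UVs by an amount proportional to the depth and the mouse offset.
const parallaxMaterial = new THREE.ShaderMaterial({
  uniforms: {
    map: { value: spriteMap },                   // the 2D image
    depthMap: { value: depthMap },               // the depth map
    uMouse: { value: new THREE.Vector2(0, 0) },  // -1..1, updated in the mousemove listener
    uStrength: { value: 0.02 }                   // maximum UV shift
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D map;
    uniform sampler2D depthMap;
    uniform vec2 uMouse;
    uniform float uStrength;
    varying vec2 vUv;
    void main() {
      float depth = texture2D(depthMap, vUv).r;   // brighter = closer
      vec2 offset = uMouse * uStrength * depth;   // closer pixels shift more
      gl_FragColor = texture2D(map, vUv + offset);
    }`
});
// const plane = new THREE.Mesh(new THREE.PlaneGeometry(200, 200), parallaxMaterial);
Because the shift happens per fragment rather than per vertex, this avoids the resolution limit you hit with displacementMap.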
I'm new to PixiJS and I'm trying something simple like a painting application.
I'm having difficulty trying to capture a collection of shapes as a single grouping. I'm not interested in working code for this as I'd like to figure that out on my own; I'm simply interested in knowing whether I'm on the right track or if I need to explore some other PixiJS concepts to get what I need.
I have one canvas in which I can drag shapes such as rectangles, ellipse, and lines. These "strokes" are being stored as individual Graphics objects, for instance:
var shape = new PIXI.Graphics();
shape.position.set(...);
...
shape.lineStyle(...)
.beginFill(...)
.drawRect(...)
.endFill();
...
stage.addChild(shape);
...
renderer.render(stage);
I'm also holding onto these shapes in an array:
shapes.push(shape);
Now that I have these displayed and have the order of the strokes available, I'd like to be able to capture them somehow. Imagine taking the drawing and saving it, using it as a thumbnail in a gallery, or simply storing it on the back end in a database, preferably keeping all the raw strokes so that they can be scaled up or down as desired.
For now, I'm simply trying to take this collection of strokes and display them again by holding onto them, clearing the graphics from my canvas, and then plopping down what I was holding.
Looking at this example, I've been able to get a texture that I can reliably reproduce wherever I click with the mouse:
http://jsfiddle.net/gzh14bcn/
This means I've been able to take the first part, which creates the texture object, and tweak the second part to create and display the sprites when I click the mouse.
When I try to replace that example code with my own code to create the texture itself, I can't get that part to work.
So this example snippet works fine when I try to create a sprite from it:
var texture = new PIXI.RenderTexture(renderer, 16, 16);
var graphics = new PIXI.Graphics();
graphics.beginFill(0x44FFFF);
graphics.drawCircle(8, 8, 8);
graphics.endFill();
texture.render(graphics);
FYI to create sprites:
var sprite = new PIXI.Sprite(texture);
sprite.position.set(xPos, yPos);
stage.addChild(sprite);
Since I have my shapes in the shapes array or on the stage, what is the preferred way to proceed to capture them as a single grouping from which I can create one or more sprites?
So basically you already know how to make a PIXI.Graphics shape:
var pixiRect = new PIXI.Graphics();
pixiRect.lineStyle(..);
pixiRect.beginFill(..);
pixiRect.drawRect(..);
pixiRect.endFill(..);
(You can draw as many rects/circles/shapes as you want into one PIXI.Graphics)
But to convert it to a texture, you must tell the renderer to create it:
var texture = renderer.generateTexture(pixiRect);
Then you can easily create a PIXI.Sprite from this texture:
var spr = new PIXI.Sprite(texture);
And the last thing is to add it to your stage or your array; alternatively, you can make an empty PIXI.Container and addChild to that, which gives you your grouping (see the sketch after the options below).
option - add the sprite (created from the graphics) to the stage:
stage.addChild(spr);
option - push it to your array:
shapes.push(spr);
option - if you have var shapes = new PIXI.Container();, add it to that container:
shapes.addChild(spr);
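A minimal sketch of capturing the whole drawing as one texture this way (it assumes the renderer.generateTexture form above and your existing shapes array of PIXI.Graphics; note that addChild re-parents each stroke):
var group = new PIXI.Container();
shapes.forEach(function (shape) {
    group.addChild(shape);                          // re-parent every stroke into one container
});
var groupTexture = renderer.generateTexture(group); // one texture for the whole group
var groupSprite = new PIXI.Sprite(groupTexture);
groupSprite.position.set(xPos, yPos);
stage.addChild(groupSprite);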
Working example: https://jsfiddle.net/co7Lrbq1/3/
EDIT:
To position your canvas on top, you have to addChild it later; effectively, the first addChild is at zIndex 0 and every subsequent addChild adds a layer on top of the last one.
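As a tiny illustration (the layer names here are placeholders):
stage.addChild(backgroundLayer); // added first, rendered underneath
stage.addChild(drawingLayer);    // added later, rendered on top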
I figured it out. My stage is a container:
var stage = new PIXI.Container();
var canvas = new PIXI.Graphics();
canvas.lineStyle(4, 0xffffff, 1);
canvas.beginFill(0xffffff);
canvas.drawRect(canvasStartX, canvasStartY, 500, 600);
canvas.endFill();
stage.addChild(canvas);
I changed this to the following:
var canvas = new PIXI.Container();
var canvasRect = new PIXI.Graphics();
canvasRect.lineStyle(4, 0xffffff, 1);
canvasRect.beginFill(0xffffff);
canvasRect.drawRect(canvasStartX, canvasStartY, 500, 600);
canvasRect.endFill();
canvas.addChild(canvasRect);
stage.addChild(canvas);
Then, I replaced stage with canvas where appropriate and canvas with canvasRect where appropriate.
Finally, I got my texture with:
var texture = canvas.generateTexture(renderer); // note: older Pixi versions expose generateTexture on the display object; newer ones use renderer.generateTexture(displayObject)
At the moment, this grabbed the entire width/height of the stage, but I think I just need to tweak how I create my canvas above and I should be fine.
I have a three.js project where I'm adding 100 meshes, divided into 10 scenes:
//add 100 meshes to 10 scenes based on an array containing 100 elements
for (var i = 0; i < data.length; i++) {
    mesh = new THREE.Mesh(geometry, material);
    // random positions so they don't spawn on the same spot
    mesh.position.x = THREE.Math.randInt(-500, 500);
    mesh.position.z = THREE.Math.randInt(-500, 500);
They're added in by a loop, and all these meshes are assigned to 10 scenes:
    // Assign 10 meshes per scene.
    var sceneIndex = Math.floor(i / 10);
    scenes[sceneIndex].add(mesh);
}
I also wrote some functionality that lets me rotate a mesh around the center of the scene.
But I don't know how to apply that rotation to all the meshes while still keeping them divided into their corresponding scenes. This probably sounds way too vague, so I have a fiddle that holds all of the relevant code.
If you comment these two lines back in, you'll see that the meshes all move to scenes[0] and rotate fine the way I wanted, but I still need them divided into their individual scenes:
spinningRig.add(mesh);
scenes[0].add(spinningRig);
What should the code look like? What is the logic behind it?
The logic is fairly simple. The simplest approach would be to have a separate spinningRig for each scene -- essentially a grouping of the meshes for that scene.
When you create each scene, you'll also create a spinningRig and add + assign it to that scene:
// Setup 10 scenes
for (var i = 0; i < 10; i++) {
    scenes.push(new THREE.Scene());

    // Add the spinningRig to the scene
    var spinningRig = new THREE.Object3D();
    scenes[i].add(spinningRig);

    // Track the spinningRig on the scene, for convenience.
    scenes[i].userData.spinningRig = spinningRig;
}
Then instead of adding the meshes directly to the scene, add them to the spinningRig for the scene:
var sceneIndex = Math.floor(i/10);
scenes[sceneIndex].userData.spinningRig.add(mesh);
And finally, rotate the spinningRig assigned to the currentScene:
currentScene.userData.spinningRig.rotation.y -= 0.025;
See jsFiddle: https://jsfiddle.net/712777ee/4/
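A minimal sketch of how this fits into the render loop (currentScene, renderer and camera are assumed to come from the fiddle):
function animate() {
    requestAnimationFrame(animate);
    // only the rig of the scene currently on screen is spun
    currentScene.userData.spinningRig.rotation.y -= 0.025;
    renderer.render(currentScene, camera);
}
animate();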
I'm trying to raycast to a TextGeometry's bounding box. Currently, raycasting works for the text geometry only when the click is on the letters themselves, not around or in between them. If the click is in between the letters, no object is intersected by intersectObjects(). I need the raycast to intersect the text object when the click is in between the letters as well.
I'm defining TextGeometry as:
var textGeo = new THREE.TextGeometry( text, {
    size: size,
    height: 1,
    font: 'helvetica'
});
textGeo.computeBoundingBox();
var textMaterial = new THREE.MeshBasicMaterial({ color: fontColor });
var textMesh = new THREE.Mesh(textGeo, textMaterial);
After searching for solutions, going with the bounding box seemed the best approach. Please advise, or point to how this can be achieved. Any ideas or tips on how to do this, or is there an existing approach available?
How would I make the raycast intersect the bounding box?
I found a solution in the three.js library itself. There is an optimization in the raycast function for a Mesh that looks at the bounding box and bounding sphere to figure out whether the ray falls outside, so it can skip checking for intersection. I flipped it around for my case:
var inverseMatrix = new THREE.Matrix4(), ray = new THREE.Ray();
// here textGeo is the text mesh built from the TextGeometry above (it owns .geometry and .matrixWorld)
inverseMatrix.getInverse(textGeo.matrixWorld);
// transform the picking ray into the mesh's local space before testing the local bounding box
ray.copy(raycaster.ray).applyMatrix4(inverseMatrix);
if (textGeo.geometry.boundingBox !== null) {
    if (ray.isIntersectionBox(textGeo.geometry.boundingBox) === true) {
        // intersected
    }
}
Create the bounding box of your geometry and create a box geometry for that bbox.
Create a THREE.Object3D and add the bounding-box mesh as its child (name it obbox).
Add obbox to the scene.
Now if you raycast against the scene you will get the obbox object first, because it will always be closer to the origin of the ray.
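A rough sketch of that idea (boxMesh, obbox and the sizing math are my own, assuming textMesh has no extra local transform): build a box the size of the text's bounding box, keep it from rendering, and raycast against it.
textGeo.computeBoundingBox();
var bbox = textGeo.boundingBox;
var size = new THREE.Vector3().subVectors(bbox.max, bbox.min);
var boxMesh = new THREE.Mesh(
    new THREE.BoxGeometry(size.x, size.y, size.z),
    new THREE.MeshBasicMaterial({ transparent: true, opacity: 0 }) // renders nothing, still pickable
);
boxMesh.position.copy(bbox.min).add(size.clone().multiplyScalar(0.5)); // center of the bbox
var obbox = new THREE.Object3D();
obbox.add(textMesh);
obbox.add(boxMesh);
scene.add(obbox);
// raycaster.intersectObject(obbox, true) now hits the box even between the letters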
I have created a cube (skybox) that uses different materials for each side. There is no problem with that using MeshFaceMaterial:
var imagePrefix = "images-nissan/pano_";
var imageDirections = ["xpos", "xneg", "ypos", "yneg", "zpos", "zneg"];
var imageSuffix = ".png";
var skyGeometry = new THREE.BoxGeometry(1, 1, 1);
var materialArray = [];
for (var i = 0; i < 6; i++) {
    materialArray.push(new THREE.MeshBasicMaterial({
        map: THREE.ImageUtils.loadTexture(imagePrefix + imageDirections[i] + imageSuffix),
        side: THREE.BackSide
    }));
}
var skyMaterial = new THREE.MeshFaceMaterial(materialArray);
var skyBox = new THREE.Mesh(skyGeometry, skyMaterial);
skyBox.name = "interiorMesh";
scene.add(skyBox);
However, now I would like to add a material to one of the faces of the cube and combine the materials on that face.
So basically I would have one material on 5 faces and 2 materials on 1 face of the cube - I want to overlay the 'original' texture with another, partially transparent PNG so it covers only a specific part of the original image. Both images have the same dimensions; only the new one is partially transparent. Is it even possible to do with CubeGeometry? Or do I need to do it with planes? Any help greatly appreciated!
You can for sure change the material of one of the faces. You cannot use two materials for one face, though.
I would recommend creating an additional texture as a combination of the previous two, making it into a separate material, and assigning it to the sixth face of the cube when needed. If possible, merge those images beforehand in your graphics editor of choice. If you can only do it at runtime, you will either have to use a canvas to merge them or a shader, as recommended by @beiller.
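A rough sketch of the runtime canvas-merge route (makeMergedMaterial and the image arguments are placeholder names): draw the original face image and the transparent overlay into one canvas, then use that canvas as the texture for the sixth entry of materialArray.
function makeMergedMaterial(baseImage, overlayImage) {
    var canvas = document.createElement('canvas');
    canvas.width = baseImage.width;
    canvas.height = baseImage.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(baseImage, 0, 0);
    ctx.drawImage(overlayImage, 0, 0);  // same dimensions, overlay keeps its alpha
    var texture = new THREE.Texture(canvas);
    texture.needsUpdate = true;         // required after drawing into the canvas
    return new THREE.MeshBasicMaterial({ map: texture, side: THREE.BackSide });
}
// once both images have loaded, e.g.:
// materialArray[5] = makeMergedMaterial(baseImage, overlayImage);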
I wouldn't recommend transparent planes; transparency can be tricky and sometimes renders in a weird way.
Something similar is discussed here - Multiple transparent textures on the same mesh face in Three.js