I would like to make a blobby effect on my website, like the one in this link:
https://codepen.io/JuanFuentes/pen/mNVaBX
But I find the blob is not responsive when I try to scale the webpage; it stays in its original position.
Can anyone help me?
It's not enough to simply resize the canvas! You should also reposition your ball's vertices according to the new canvas width / height.
Here is updated code: https://codepen.io/gbnikolov/pen/xxZBVMX?editors=0010
Pay attention to line 52:
this.resize = function (deltaX, deltaY) {
  // deltaX and deltaY are the difference between the old canvas width / height and the new ones
  for (let i = 0; i < this.vertex.length; i++) {
    const vertex = this.vertex[i]
    vertex.x += deltaX
    vertex.y += deltaY
  }
}
EDIT: as you can see, the blob takes some time to animate its vertices to the new position on resize. I am not sure whether this is good or bad for your purposes. A more advanced solution would be to stop the requestAnimationFrame() update loop, update the vertices to their new positions, and only then start your loop again (so the blob "snaps" to its right position immediately).
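For reference, a minimal sketch of that snapping approach; rafId, loop, blob, oldWidth, and oldHeight are names I'm assuming here, not taken from the CodePen:
var rafId = requestAnimationFrame(loop);

window.addEventListener('resize', function () {
  cancelAnimationFrame(rafId); // pause the update loop
  // shift the vertices by the full difference, as in the resize() above
  blob.resize(window.innerWidth - oldWidth, window.innerHeight - oldHeight);
  oldWidth = window.innerWidth;
  oldHeight = window.innerHeight;
  rafId = requestAnimationFrame(loop); // resume; the blob is already in place
});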
EDIT 2: You don't need to return this from constructor functions when calling them with the new keyword; JS does this for you automatically:
var Blob = function(args) {
  // ...
  return this // <- that's not needed!
}
Read more about the this property and its relationship to classes in JS here
Related
I'm trying to detect when an object in Three.js is partially or fully occluded by (hidden behind) another object.
My current simple solution casts a single ray to the center of the object:
function getScreenPos(object) {
  var pos = object.position.clone();
  camera.updateMatrixWorld();
  pos.project(camera);
  return new THREE.Vector2(pos.x, pos.y);
}

function isOccluded(object) {
  raycaster.setFromCamera(getScreenPos(object), camera);
  var intersects = raycaster.intersectObjects(scene.children);
  if (intersects[0] && intersects[0].object === object) {
    return false;
  } else {
    return true;
  }
}
However, it doesn't account for the object's dimensions (width, height, depth).
Not occluded (because the center of the object is not behind the other object)
Occluded (because the center of the object is behind the other object)
View working demo:
https://jsfiddle.net/kmturley/nb9f5gho/57/
Currently I'm thinking I could calculate the object's bounding-box size and cast rays for each corner of the box. But this might still be a little too simple (see the sketch below):
var box = new THREE.Box3().setFromObject(object);
var size = box.getSize(new THREE.Vector3()); // recent three.js versions require a target vector
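To flesh that idea out, a rough sketch that counts visible corners, reusing the raycaster, camera, and scene globals from above (note that a ray aimed at a bounding-box corner can miss the object itself, which is part of why this is too simple):
var box = new THREE.Box3().setFromObject(object);
var corners = [
  new THREE.Vector3(box.min.x, box.min.y, box.min.z),
  new THREE.Vector3(box.min.x, box.min.y, box.max.z),
  new THREE.Vector3(box.min.x, box.max.y, box.min.z),
  new THREE.Vector3(box.min.x, box.max.y, box.max.z),
  new THREE.Vector3(box.max.x, box.min.y, box.min.z),
  new THREE.Vector3(box.max.x, box.min.y, box.max.z),
  new THREE.Vector3(box.max.x, box.max.y, box.min.z),
  new THREE.Vector3(box.max.x, box.max.y, box.max.z)
];
var visible = corners.filter(function (corner) {
  var p = corner.clone().project(camera); // corner in normalized device coordinates
  raycaster.setFromCamera(new THREE.Vector2(p.x, p.y), camera);
  var hits = raycaster.intersectObjects(scene.children);
  return hits[0] && hits[0].object === object;
}).length;
// visible === 0 -> probably fully occluded; 0 < visible < 8 -> partially occluded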
I would like to find a more robust approach that could give partially-occluded and fully-occluded boolean values, or maybe even a percentage occluded?
Search Stack Overflow and the Three.js examples for "GPU picking." The concept can be broken down into three basic steps:
Change the material of each shape to a unique flat (MeshBasicMaterial) color.
Render the scene with the unique materials.
Read the pixels of the rendered frame to collect color information.
Your scenario allows for a few shortcuts:
Give only the shape you're testing a unique color; everything else can be black.
You don't need to render the full scene to test one shape. You could adjust your viewport to render only the area surrounding the shape in question (see the sketch after this list).
Because only your test part was given a color, the rest of the data should be zeroes, making it much easier to find pixels matching your unique color.
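For the viewport shortcut, a rough sketch; the x, y, w, h bounds around the shape are assumed to be computed elsewhere:
renderer.setScissorTest(true);
renderer.setScissor(x, y, w, h); // restrict rendering to the region around the shape
renderer.setViewport(x, y, w, h);
renderer.render(scene, camera);  // render with the unique flat materials applied
renderer.setScissorTest(false);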
Now that you have the pixel data, you can determine the following:
If NO pixels match the unique color, then the shape is fully occluded.
If SOME pixels match the unique color, then the shape is at least partially visible.
The second bullet says that the shape is "at least partially" visible. This is because you can't test for full visibility with the information you currently have.
What I would do (and someone else might have a better solution) is render the same viewport a second time, but only have the test shape visible, which is the equivalent of the part being fully visible. With this information in hand, compare the pixels against the first render. If both have the same number (perhaps within a tolerance) of pixels of the unique color, then you can say the part is fully visible/not occluded.
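A rough sketch of that comparison; countMatchingPixels is shown in full, while renderWithOccluders() and renderShapeAlone() are hypothetical helpers returning the RGBA pixel buffers of the two passes, and the unique color is assumed to be pure red:
function countMatchingPixels(pixels, r, g, b) {
  var count = 0;
  for (var i = 0; i < pixels.length; i += 4) { // RGBA, 4 bytes per pixel
    if (pixels[i] === r && pixels[i + 1] === g && pixels[i + 2] === b) count++;
  }
  return count;
}

var occludedPixels = countMatchingPixels(renderWithOccluders(), 255, 0, 0);
var isolatedPixels = countMatchingPixels(renderShapeAlone(), 255, 0, 0);

var fullyOccluded = occludedPixels === 0;
var fullyVisible = occludedPixels >= isolatedPixels * 0.99; // small tolerance
var percentVisible = isolatedPixels ? occludedPixels / isolatedPixels : 0;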
I managed to get a working version for WebGL1 based on TheJim01's answer!
First create a second simpler scene to use for calculations:
pickingScene = new THREE.Scene();
pickingTextureOcclusion = new THREE.WebGLRenderTarget(window.innerWidth / 2, window.innerHeight / 2);
pickingMaterial = new THREE.MeshBasicMaterial({ vertexColors: THREE.VertexColors });
pickingScene.add(new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries([
  createBuffer(geometry, mesh),
  createBuffer(geometry2, mesh2)
]), pickingMaterial));
Recreate your objects as BufferGeometry (better for performance):
function createBuffer(geometry, mesh) {
  var buffer = new THREE.SphereBufferGeometry(geometry.parameters.radius, geometry.parameters.widthSegments, geometry.parameters.heightSegments);
  quaternion.setFromEuler(mesh.rotation);
  matrix.compose(mesh.position, quaternion, mesh.scale);
  buffer.applyMatrix4(matrix);
  applyVertexColors(buffer, color.setHex(mesh.name));
  return buffer;
}
Add a color based on the mesh.name, e.g. an id of 1, 2, 3, etc.:
function applyVertexColors(geometry, color) {
  var position = geometry.attributes.position;
  var colors = [];
  for (var i = 0; i < position.count; i++) {
    colors.push(color.r, color.g, color.b);
  }
  geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
}
Then during the render loop check the second scene for that texture, and match pixel data to the mesh name:
function isOccludedBuffer(object) {
  renderer.setRenderTarget(pickingTextureOcclusion);
  renderer.render(pickingScene, camera);
  // 4 bytes (RGBA) per pixel of the half-size render target
  var pixelBuffer = new Uint8Array((window.innerWidth / 2) * (window.innerHeight / 2) * 4);
  renderer.readRenderTargetPixels(pickingTextureOcclusion, 0, 0, window.innerWidth / 2, window.innerHeight / 2, pixelBuffer);
  renderer.setRenderTarget(null);
  // if the object's id-color never appears in the pixels, the object is occluded
  return !pixelBuffer.includes(object.name);
}
You can view the WebGL1 working demo here:
https://jsfiddle.net/kmturley/nb9f5gho/62/
One caveat to note with this approach is that your picking scene needs to stay up to date with changes in your main scene. So if your objects change position/rotation etc., they need to be updated in the picking scene too. In my example the camera is moving, not the objects, so the picking scene doesn't need updating.
For WebGL2 we will have a better solution:
https://tsherif.github.io/webgl2examples/occlusion.html
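That example uses WebGL2 occlusion queries, which look roughly like this (a sketch, not code copied from the linked page):
var query = gl.createQuery();
gl.beginQuery(gl.ANY_SAMPLES_PASSED_CONSERVATIVE, query);
// ... draw the object you want to test ...
gl.endQuery(gl.ANY_SAMPLES_PASSED_CONSERVATIVE);

// some frames later, once the result is ready:
if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
  var anySamplesPassed = gl.getQueryParameter(query, gl.QUERY_RESULT);
  // anySamplesPassed === 0 means the object was fully occluded
}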
But this is not supported in all browsers yet:
https://www.caniuse.com/#search=webgl
I created a rectangle and a square object using an OOP approach.
Next, I would like to move my rectangle to another position, using the moveTo function.
My rectangle moves, but relative to its creation position, not to an absolute position.
What do I have to specify to move this rectangle to x=100, y=100?
var container = new PIXI.Container();
var renderer = PIXI.autoDetectRenderer(320, 480, {backgroundColor: 0x1099bb});
document.body.appendChild(renderer.view);
requestAnimationFrame(animate);

var blue = 0x00c3ff;
var red = 0xFF0040;

var SHAPE = {};
SHAPE.geom = function(posx, posy, width, height, color) {
  PIXI.Graphics.call(this, posx, posy, width, height, color);
  this.beginFill(color);
  this.drawRect(posx, posy, width, height);
};
SHAPE.geom.prototype = Object.create(PIXI.Graphics.prototype);
SHAPE.geom.prototype.constructor = SHAPE.geom;

var square = new SHAPE.geom(10, 10, 10, 10, red);
var rect = new SHAPE.geom(200, 400, 80, 30, blue);
rect.moveTo(100, 100);
container.addChild(square, rect);

function animate() {
  requestAnimationFrame(animate);
  renderer.render(container);
}
Check the PIXI docs for moveTo: https://pixijs.github.io/docs/PIXI.Graphics.html#moveTo
It moves the DRAWING position to the given coordinates; it does not move an already-drawn object. So if you use moveTo and then draw with the graphics object, it should draw starting from that position. At least in theory (I have never actually used moveTo).
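For illustration, a small sketch of what moveTo is actually meant for, i.e. path drawing (this snippet is mine, not from your post):
var g = new PIXI.Graphics();
g.lineStyle(2, 0xFF0040);
g.moveTo(100, 100); // moves the drawing pen, not the object
g.lineTo(200, 150); // draws a line from (100,100) to (200,150)
container.addChild(g);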
You should use the object's .x, .y or .position properties to set where you want the display object to reside inside its parent container. So something like:
rect.x = 100;
rect.y = 100;
or
rect.position = new PIXI.Point(100, 100);
If there are any issues, please let me know and I will make a Plunker to give you a working example; unfortunately I don't have time for that right now.
Also, in general it is a good idea to put simple examples like this in Plunker, JSFiddle, or something equivalent. Then the person answering can easily modify your code and show you a version that definitely works. It's better for both sides.
I'm working on a Paper.js application that puts a raster (an image) in the view. Then it zooms to fit the image so that all of it is visible at one time. It's mostly working, but the image ends up offset, like this:
When it should look more like this:
Here's the code that makes the view, adds the image, and makes the call to zoom to fit:
// Set up HTMLImage
var image = new Image(this.props.image.width, this.props.image.height);
image.src = 'data:image/png;base64,' + this.props.image.imageData;
//var canvas = React.findDOMNode(this.refs.canvas); // React 0.13 +
var canvas = this.refs.canvas.getDOMNode();
// Scale width based on scaled height; canvas height has been set to the height of the document (this.props.height)
var scalingFactor = canvas.height/image.height;
canvas.width = image.width * scalingFactor;
// Initialize Paper.js on the canvas
paper.setup(canvas);
// Add image to Paper.js canvas
var raster = new paper.Raster(image, new paper.Point(0,0));
// Fit image to page so whole thing is displayed
var delta = scalingFactor < 1 ? -1 : 1; // Arbitrary delta (for determining zoom in/out) based on scaling factor
var returnedValues = panAndZoom.changeZoom(paper.view.zoom, delta, paper.view.center, paper.view.center, scalingFactor);
paper.view.zoom = returnedValues[0];
And here is the panAndZoom.changeZoom method:
SimplePanAndZoom.prototype.changeZoom = function(oldZoom, delta, centerPoint, offsetPoint, zoomFactor) {
  var newZoom = oldZoom;
  if (delta < 0) {
    newZoom = oldZoom * zoomFactor;
  }
  if (delta > 0) {
    newZoom = oldZoom / zoomFactor;
  }
  var a = null;
  if (!centerPoint.equals(offsetPoint)) {
    var scalingFactor = oldZoom / newZoom;
    var difference = offsetPoint.subtract(centerPoint);
    a = offsetPoint.subtract(difference.multiply(scalingFactor)).subtract(centerPoint);
  }
  return [newZoom, a];
};
Any idea why it zooms to fit but loses the centering?
TL;DR
Use either, but not both:
After the zoom:
paper.view.setCenter(0,0);
When pasting the image:
var raster = new paper.Raster(image, new paper.Point(canvas.width/2, canvas.height/2));
The long answer
As a disclaimer, I must point out that I have no knowledge of paper.js, and its documentation seems rather terrible. This answer is born from fiddling.
I more or less replicated your code and after some tinkering, I managed to fix the issue by using this:
paper.view.zoom = returnedValues[0];
paper.view.setCenter(0,0);
If you wonder why I'm not pointing at the documentation for setCenter, it's because I couldn't find any. I discovered that method by inspecting paper.view.center.
Still, I wasn't satisfied, as I could not understand why this would work. So I kept looking at your code and noticed this:
// Add image to Paper.js canvas
var raster = new paper.Raster(image, new paper.Point(0,0));
The documentation for raster tells us the following:
Raster
Creates a new raster item from the passed argument, and places it in the active layer. object can either be a DOM Image, a Canvas, or a string describing the URL to load the image from, or the ID of a DOM element to get the image from (either a DOM Image or a Canvas).
Parameters:
source: HTMLImageElement / HTMLCanvasElement / String — the source of the raster — optional
position: Point — the center position at which the raster item is placed — optional
My understanding is that the image is pasted on the canvas with its center at position 0,0, also known as the top left corner. Then the content of the canvas is resized, but based on its own center position. This pulls the image closer to the center of the canvas, but there is still a discrepancy.
Setting the center to 0,0 simply synchronizes the center points of both the image and the canvas.
There is also an alternative way, which is to paste the image at the current center of the canvas, which seems more proper:
// Add image to Paper.js canvas
var raster = new paper.Raster(image, new paper.Point(canvas.width/2, canvas.height/2));
Here is the rudimentary JSFiddle to experiment.
I've been following this tutorial (http://creativejs.com/tutorials/three-js-part-1-make-a-star-field/) and everything is going fine, but I'd like to know how I can modify the script so that there's some form of callback when the particles reach the end. I've modified the tutorial's code so the particles move in the opposite direction.
What I want to create is a load of particles coming together to form a square. Is it possible to use the code in the tutorial as a starting point and build on top of it, or should I look elsewhere and start over?
Thanks in advance.
Wouldn't it be easiest to create the same number of particles as the model has vertices, and move each particle towards its corresponding vertex position?
var p = emitter.geometry.vertices; // the vertices (particles) of your particle emitter
var m = model.geometry.vertices;   // the vertices of the target model
for (var i = 0; i < p.length; i++) {
  // move each particle halfway towards its target vertex on every frame
  p[i].x = (p[i].x + m[i].x) / 2;
  p[i].y = (p[i].y + m[i].y) / 2;
  p[i].z = (p[i].z + m[i].z) / 2;
}
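Since you asked for a callback when the particles arrive, here's a sketch with a completion check; onArrived is a hypothetical callback, and the rest reuses the p and m arrays from above:
var done = true;
for (var i = 0; i < p.length; i++) {
  p[i].x = (p[i].x + m[i].x) / 2;
  p[i].y = (p[i].y + m[i].y) / 2;
  p[i].z = (p[i].z + m[i].z) / 2;
  if (p[i].distanceTo(m[i]) > 0.01) done = false; // still travelling
}
emitter.geometry.verticesNeedUpdate = true; // tell three.js the vertices moved
if (done && typeof onArrived === 'function') onArrived();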
Recently I have been furthering my knowledge of JavaScript by combining the Processing.js and Box2D.js libraries to make some neat browser-based simulations.
With my current idea, I am trying to let the user click and drag a shape, then let it drop once the mouse is released. So far I have figured out how to use the b2MouseJoint object to manipulate a body with mouseX/mouseY coordinates, but it doesn't quite work to the full extent.
All that happens when a shape is clicked is that it gets pinned and revolves around whatever mouseX/mouseY point was current at the time of the click.
void mousePressed() {
  for (int i = 0; i < circles.size(); i++) {
    // Get body objects from ArrayList
    var obj = circles[i];
    // Retrieve shapes from body
    var innerShape = obj.GetShapeList();
    var rad = innerShape.m_radius;
    // Create mouseJoint and add attributes
    var mouseJoint = new b2MouseJointDef();
    mouseJoint.body1 = world.GetGroundBody();
    // Detect body
    if (dist(mouseX, mouseY, obj.m_position.x, obj.m_position.y) < rad) {
      Vec2 p = new b2Vec2(mouseX, mouseY);
      mouseJoint.body2 = obj;
      mouseJoint.target = p;
      mouseJoint.maxForce = 10000.0f * obj.GetMass();
      mouseJoint.collideConnected = true;
      mouseJoint.dampingRatio = 0;
      mouseJoint.frequencyHz = 100;
      world.CreateJoint(mouseJoint);
    }
  }
}
So basically my question is, how can this be written so the Body/Shape follows my mouse's coordinates while I have the mouse held down, instead of just pinning the shape in place.
Cheers
Basically, all you have to do is add the coordinate-setting code you currently have in mousePressed() to a mouseDragged() method as well, since that is the event method that gets called while the mouse is moved with one or more buttons held down:
void mouseDragged()
{
  // update obj with mouseX and mouseY in this method
}
You may also want to do a bit more bookkeeping by setting up "initial click" mark variables in mousePressed(), updating a set of "offset" variables in mouseDragged(), and committing the offsets to the marks in mouseReleased(), so that you can do things like snapping back to the original position, etc.
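To make that concrete for your joint-based setup, a minimal sketch; it assumes Box2DWeb-style SetTarget()/DestroyJoint() methods and that the joint created in mousePressed() is stored in a variable declared outside of it:
var activeJoint = null;

// in mousePressed(), keep a reference to the created joint:
// activeJoint = world.CreateJoint(mouseJoint);

void mouseDragged() {
  if (activeJoint != null) {
    activeJoint.SetTarget(new b2Vec2(mouseX, mouseY)); // body follows the mouse
  }
}

void mouseReleased() {
  if (activeJoint != null) {
    world.DestroyJoint(activeJoint); // let the shape drop
    activeJoint = null;
  }
}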