I created a rectangle and a square object using an OOP approach.
Next, I would like to move my rectangle to another position, so I use the function moveTo.
My rectangle moves, but relative to its creation position rather than to an absolute position.
What do I have to specify to move this rectangle to x=100, y=100?
var container = new PIXI.Container();
var renderer = PIXI.autoDetectRenderer(320, 480, {backgroundColor: 0x1099bb});
document.body.appendChild(renderer.view);
requestAnimationFrame(animate);

var blue = 0x00c3ff;
var red = 0xFF0040;

var SHAPE = {};
SHAPE.geom = function(posx, posy, width, height, color) {
    PIXI.Graphics.call(this, posx, posy, width, height, color);
    this.beginFill(color);
    this.drawRect(posx, posy, width, height);
};
SHAPE.geom.prototype = Object.create(PIXI.Graphics.prototype);
SHAPE.geom.prototype.constructor = SHAPE.geom;

var square = new SHAPE.geom(10, 10, 10, 10, red);
var rect = new SHAPE.geom(200, 400, 80, 30, blue);
rect.moveTo(100, 100);
container.addChild(square, rect);

function animate() {
    requestAnimationFrame(animate);
    renderer.render(container);
}
Check the PIXI docs for moveTo: https://pixijs.github.io/docs/PIXI.Graphics.html#moveTo
It moves the DRAWING position to the given coordinates, not the already existing object. So if you were to call moveTo and then draw with the graphics object, it should draw starting from that position. At least in theory (I have never used moveTo myself).
You should use the object's .x, .y or .position properties to set where you want the display object to reside inside its parent container. So something like:
rect.x = 100;
rect.y = 100;
or
rect.position = new PIXI.Point(100, 100);
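Applied to the code in the question, a minimal sketch (based on the advice above, assuming the same PIXI version) is to draw the rectangle at the graphics object's local origin and use position for placement:
SHAPE.geom = function(posx, posy, width, height, color) {
    PIXI.Graphics.call(this);
    this.beginFill(color);
    // draw at the local origin, so x/y/position describe the shape's placement in its parent
    this.drawRect(0, 0, width, height);
    this.endFill();
    this.position.set(posx, posy);
};

var rect = new SHAPE.geom(200, 400, 80, 30, blue);
rect.position.set(100, 100); // the rectangle now sits at (100, 100) inside the container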
If there are any issues, please let me know and I will make a plunkr for you to give you a working example. I do not have the time for that now unfortunately.
Also, in general it is a good idea to put simple examples like this in Plunker, JSFiddle or something equivalent. Then the person answering can easily modify your code and show you a working example. It is better for both of you.
I would like to make a blobby effect on my website like the one in the linked pen:
https://codepen.io/JuanFuentes/pen/mNVaBX
However, the blob is not responsive when I resize the webpage; it stays in its original position.
Can anyone help me?
It's not enough to simply resize the canvas! You should also reposition your ball's vertices according to the new canvas width / height.
Here is updated code: https://codepen.io/gbnikolov/pen/xxZBVMX?editors=0010
Pay attention to line 52:
this.resize = function (deltaX, deltaY) {
    // deltaX and deltaY are the difference between old canvas width / height and new ones
    for (let i = 0; i < this.vertex.length; i++) {
        const vertex = this.vertex[i];
        vertex.x += deltaX;
        vertex.y += deltaY;
    }
};
EDIT: as you see, the blob takes some time to animate its vertices to the new position on resize. I am not sure whether this is good or bad for your purposes. A more advanced solution would be to stop the requestAnimationFrame() update loop, update the vertices to their new positions and only then start your loop again (so the blob "snaps" to its correct position immediately).
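A rough sketch of that more advanced idea (hypothetical wiring, not taken from the CodePen: rafId stores the requestAnimationFrame handle, update() stands in for the pen's render loop, and blob is the Blob instance whose resize(deltaX, deltaY) is shown above):
var rafId = null;
var oldWidth = window.innerWidth;
var oldHeight = window.innerHeight;

function update() {
    // ... draw the blob ...
    rafId = requestAnimationFrame(update);
}

window.addEventListener('resize', function() {
    // stop the loop so no intermediate frames are rendered
    cancelAnimationFrame(rafId);

    // move the vertices straight to their final positions
    blob.resize(window.innerWidth - oldWidth, window.innerHeight - oldHeight);
    oldWidth = window.innerWidth;
    oldHeight = window.innerHeight;

    // restart the loop: the blob appears at its new position immediately
    rafId = requestAnimationFrame(update);
});

rafId = requestAnimationFrame(update);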
EDIT 2: You don't need to return this from constructor functions when calling them with the new keyword, JS does this for you automatically:
var Blob = function(args) {
    // ...
    return this; // <- that's not needed!
};
Read more about the this property and its relationship to classes in JS here
I'm trying to detect when an object in Three.js is partially and fully occluded (hidden behind) another object.
My current simple solution casts a single ray to the center of the object:
function getScreenPos(object) {
    var pos = object.position.clone();
    camera.updateMatrixWorld();
    pos.project(camera);
    return new THREE.Vector2(pos.x, pos.y);
}

function isOccluded(object) {
    raycaster.setFromCamera(getScreenPos(object), camera);
    var intersects = raycaster.intersectObjects(scene.children);
    if (intersects[0] && intersects[0].object === object) {
        return false;
    } else {
        return true;
    }
}
However it doesn't account for the object's dimensions (width, height, depth).
Not occluded (because center of object is not behind)
Occluded (because center of object is behind)
View working demo:
https://jsfiddle.net/kmturley/nb9f5gho/57/
Currently I'm thinking I could calculate the object's bounding box and cast rays for each corner of the box, but this might still be a little too simple:
var box = new THREE.Box3().setFromObject(object);
var size = box.getSize();
I would like to find a more robust approach which could give partially occluded and fully occluded boolean values, or maybe even a percentage occluded.
Search Stack Overflow and the Three.js examples for "GPU picking." The concept can be broken down into three basic steps:
Change the material of each shape to a unique flat (MeshBasicMaterial) color.
Render the scene with the unique materials.
Read the pixels of the rendered frame to collect color information.
Your scenario allows for a few shortcuts.
Give only the shape you're testing a unique color--everything else can be black.
You don't need to render the full scene to test one shape. You could adjust your viewport to render only the area surrounding the shape in question.
Because you gave a color only to your test part, the rest of the data should be zeroes, making it much easier to find pixels matching your unique color.
Now that you have the pixel data, you can determine the following:
If NO pixels match the unique color, then the shape is fully occluded.
If SOME pixels match the unique color, then the shape is at least partially visible.
The second bullet says that the shape is "at least partially" visible. This is because you can't test for full visibility with the information you currently have.
What I would do (and someone else might have a better solution) is render the same viewport a second time, but only have the test shape visible, which is the equivalent of the part being fully visible. With this information in hand, compare the pixels against the first render. If both have the same number (perhaps within a tolerance) of pixels of the unique color, then you can say the part is fully visible/not occluded.
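A sketch of that comparison step (hypothetical setup: occlusionTarget holds the render of the full scene with only the test shape coloured red, isolatedTarget holds the second render with only the test shape visible; readRenderTargetPixels and WebGLRenderTarget are standard Three.js APIs):
// count how many pixels in a render target match the unique flat colour (r, g, b in 0-255)
function countMatchingPixels(renderer, renderTarget, r, g, b) {
    var width = renderTarget.width;
    var height = renderTarget.height;
    var buffer = new Uint8Array(width * height * 4); // RGBA bytes
    renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, buffer);

    var count = 0;
    for (var i = 0; i < buffer.length; i += 4) {
        if (buffer[i] === r && buffer[i + 1] === g && buffer[i + 2] === b) {
            count++;
        }
    }
    return count;
}

// visiblePixels: render of the full scene, test shape uniquely coloured, everything else black
// totalPixels:   second render with only the test shape visible
var visiblePixels = countMatchingPixels(renderer, occlusionTarget, 255, 0, 0);
var totalPixels = countMatchingPixels(renderer, isolatedTarget, 255, 0, 0);

var fullyOccluded = visiblePixels === 0;
var fullyVisible = totalPixels > 0 && visiblePixels >= totalPixels; // add a tolerance if needed
var percentVisible = totalPixels > 0 ? (visiblePixels / totalPixels) * 100 : 0;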
I managed to get a working version for WebGL1 based on TheJim01's answer!
First create a second simpler scene to use for calculations:
pickingScene = new THREE.Scene();
pickingTextureOcclusion = new THREE.WebGLRenderTarget(window.innerWidth / 2, window.innerHeight / 2);
pickingMaterial = new THREE.MeshBasicMaterial({ vertexColors: THREE.VertexColors });
pickingScene.add(new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries([
    createBuffer(geometry, mesh),
    createBuffer(geometry2, mesh2)
]), pickingMaterial));
Recreate your objects as BufferGeometry (better for performance):
function createBuffer(geometry, mesh) {
    var buffer = new THREE.SphereBufferGeometry(geometry.parameters.radius, geometry.parameters.widthSegments, geometry.parameters.heightSegments);
    quaternion.setFromEuler(mesh.rotation);
    matrix.compose(mesh.position, quaternion, mesh.scale);
    buffer.applyMatrix4(matrix);
    applyVertexColors(buffer, color.setHex(mesh.name));
    return buffer;
}
Add a color based on the mesh.name, e.g. an id of 1, 2, 3, etc.:
function applyVertexColors(geometry, color) {
    var position = geometry.attributes.position;
    var colors = [];
    for (var i = 0; i < position.count; i++) {
        colors.push(color.r, color.g, color.b);
    }
    geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
}
Then, during the render loop, render the second scene into that render target and match the pixel data against the mesh name:
function isOccludedBuffer(object) {
    renderer.setRenderTarget(pickingTextureOcclusion);
    renderer.render(pickingScene, camera);
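    // (innerWidth / 2) * (innerHeight / 2) pixels * 4 RGBA bytes per pixel = innerWidth * innerHeight bytes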
    var pixelBuffer = new Uint8Array(window.innerWidth * window.innerHeight);
    renderer.readRenderTargetPixels(pickingTextureOcclusion, 0, 0, window.innerWidth / 2, window.innerHeight / 2, pixelBuffer);
    renderer.setRenderTarget(null);
    return !pixelBuffer.includes(object.name);
}
You can view the WebGL1 working demo here:
https://jsfiddle.net/kmturley/nb9f5gho/62/
One caveat to note with this approach is that your picking scene needs to stay up to date with changes in your main scene. So if your objects change position/rotation etc., they need to be updated in the picking scene too. In my example the camera is moving, not the objects, so the picking scene doesn't need updating.
For WebGL2 we will have a better solution:
https://tsherif.github.io/webgl2examples/occlusion.html
But this is not supported in all browsers yet:
https://www.caniuse.com/#search=webgl
I'm trying to use the Snap.svg library to rotate and move my object, but I am getting this weird behavior where the object gets morphed.
http://codepen.io/anon/pen/vOwRga
var s = Snap.select("#svg");
var rect1 = s.select("#rect1");
var rect2 = s.select("#rect2");
var rotateMatrix = new Snap.Matrix();
rotateMatrix.rotate(180,302,495);
this.spin = function() {
    rect1.animate({transform: 'r180,302,160'}, 5000, function() {
        rect1.animate({transform: 'r0,302,160'}, 5000);
    });
    rect2.animate({transform: rotateMatrix}, 5000, function() {
        rect2.animate({transform: rotateMatrix.invert}, 5000);
    });
};
So here I'm rotating two different rectangles. The first one works fine; the second one, where I try to use a matrix, is the problem.
What am I doing wrong? Why is this happening? What is the meaning of life?
In the first one, you are animating a rotation. Snap knows it is a rotation because of the way you have defined it with Snap's special 'r' initialiser.
In the second one you are just passing a matrix that describes a transform from one orientation to another. Snap, and the browser, have no idea that it represents a rotation. So all it is doing is interpolating the values in the matrix.
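For example (a sketch based on the code in the question, reusing the rotation centre 302,495 from rotateMatrix), you could animate the second rectangle with the same string syntax as the first one:
// a rotation Snap understands as a rotation, instead of interpolating a raw matrix
rect2.animate({transform: 'r180,302,495'}, 5000, function() {
    rect2.animate({transform: 'r0,302,495'}, 5000);
});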
I made a post asking for opinions about which JS library would be better, or could do the work that I have shown. Since I'm not allowed to do that here, I did some research and tried out EaselJS. So my question has now changed.
I have this piece of code:
function handleImageLoad(event) {
    var img = event.target;
    bmp = new createjs.Bitmap(img);

    /* Matrix2D transformation */
    var a = 0.880114;
    var b = 0.0679298;
    var c = -0.053145;
    var d = 0.954348;
    var tx = 37.4898;
    var ty = -16.5202;
    var matrix = new createjs.Matrix2D(a, b, c, d, tx, ty);

    var polygon = new createjs.Shape();
    polygon.graphics.beginStroke("blue");
    polygon.graphics.beginBitmapFill(img, "no-repeat", matrix).moveTo(37.49, -16.52).lineTo(336.27, -36.20).lineTo(350.96, 171.30).lineTo(50.73, 169.54).lineTo(37.49, -16.52);

    stage.addChild(polygon);
    stage.update();
}
where the variables a, b, c, d, tx and ty are values from a homography matrix:
0.880114    0.067979298    37.4898
-0.053145   0.954348       -16.5202
-0.000344   1.0525e-006    1
As you can see in the attached files, I draw the deformed rectangle correctly, but the image still doesn't warp to fit the shape I created. Does anyone know how I can do that? Is there a better way to do this? Am I doing something wrong?
Thanks for your time.
Edit: To be more specific, I have added another image to show what I want.
You are attempting to do something similar to a perspective transform, using a 3x3 matrix.
Canvas's 2D context, and by extension EaselJS, only supports affine transformations with a 2x3 matrix - transformations where the opposite edges of the bounding rectangle remain parallel. For example, scaling, rotation, skewing, and translation.
http://en.wikipedia.org/wiki/Affine_transformation
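To make the limitation concrete, here is how EaselJS's 2x3 matrix relates to a full 3x3 matrix (the numeric values are the ones from the question; only the layout comment is new):
// createjs.Matrix2D(a, b, c, d, tx, ty) is the affine matrix
//
//   [ a  c  tx ]
//   [ b  d  ty ]
//   [ 0  0  1  ]
//
// The bottom row is always [0, 0, 1]. The homography in the question has
// -0.000344 and 1.0525e-006 in that row, and those perspective terms are
// exactly what the 2D canvas (and therefore EaselJS) cannot represent.
var matrix = new createjs.Matrix2D(0.880114, 0.0679298, -0.053145, 0.954348, 37.4898, -16.5202);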
You might be able to fake this with multiple objects that have been skewed (this was used extensively in Flash to fake perspective transforms), or you may have to look into another solution.
I'm working on a project that uses SVG with Raphael.js. One component is a group of circles, each of which "wiggles" around randomly - that is, slowly moves along the x and y axes a small amount, and in random directions. Think of it like putting a marble on your palm and shaking your palm around slowly.
Is anyone aware of a Raphael.js plugin or code example that already accomplishes something like this? I'm not terribly particular about the effect - it just needs to be subtle/smooth and continuous.
If I need to create something on my own, do you have any suggestions for how I might go about it? My initial idea is along these lines:
Draw a circle on the canvas.
Start a loop that:
Randomly finds x and y coordinates within some circular boundary anchored on the circle's center point.
Animates the circle from its current location to those coordinates over a random time interval, using in/out easing to smooth the effect.
My concern is that this might look too mechanical - i.e., I assume it will look more like the circle is tracing a star pattern, or having a seizure, or something like that. Ideally it would curve smoothly through the random points that it generates, but that seems far more complex.
If you can recommend any other code (preferably JavaScript) that I could adapt, that would be great too - e.g., a jQuery plugin or the like. I found one named jquery-wiggle, but that seems to only work along one axis.
Thanks in advance for any advice!
Something like the following could do it:
var paper = Raphael('canvas', 300, 300);
var circle_count = 40;
var wbound = 10; // how far an element can wiggle
var circleholder = paper.set();

function rdm(from, to) {
    return Math.floor(Math.random() * (to - from + 1) + from);
}

// add a wiggle method to elements
Raphael.el.wiggle = function() {
    var newcx = this.attrs.origCx + rdm(-wbound, wbound);
    var newcy = this.attrs.origCy + rdm(-wbound, wbound);
    this.animate({cx: newcx, cy: newcy}, 500, '<');
};

// draw our circles
// hackish: setting circle.attrs.origCx
for (var i = 0; i < circle_count; i++) {
    var cx = rdm(0, 280);
    var cy = rdm(0, 280);
    var rad = rdm(0, 15);
    var circle = paper.circle(cx, cy, rad);
    circle.attrs.origCx = cx;
    circle.attrs.origCy = cy;
    circleholder.push(circle);
}

// loop over all circles and wiggle
function wiggleall() {
    for (var i = 0; i < circleholder.length; i++) {
        circleholder[i].wiggle();
    }
}

// call wiggleall every second
setInterval(function() { wiggleall(); }, 1000);
http://jsfiddle.net/UDWW6/1/
Changing the easing, and the delays between animations, should at least help make things look a little more natural. Hope that helps.
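For instance (a small variation on the wiggle method above, not part of the original answer), randomizing each animation's duration and easing keeps the circles out of step with each other:
// reuses rdm() and wbound from the snippet above
Raphael.el.wiggle = function() {
    var newcx = this.attrs.origCx + rdm(-wbound, wbound);
    var newcy = this.attrs.origCy + rdm(-wbound, wbound);
    // random duration and in/out easing, so the circles drift out of sync
    this.animate({cx: newcx, cy: newcy}, rdm(400, 900), '<>');
};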
You can accomplish a similar effect by extending Raphael's default easing formulas:
Raphael.easing_formulas["wiggle"] = function(n) { return Math.random() * 5; };
[shape].animate({transform: "T1,1"}, 500, "wiggle", function(e) {
    this.transform("T0,0");
});
Easing functions take a ratio of time elapsed to total time and manipulate it. The returned value is applied to the properties being animated.
This easing function ignores n and returns a random value. You can create any wiggle you like by playing with the return formula.
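For example (purely illustrative, not from the original answer), a decaying oscillation that still settles on the target value could look like this:
// moves toward the target while oscillating, with the wobble fading out so it ends exactly at 1
Raphael.easing_formulas["damped_wiggle"] = function(n) {
    return n + Math.sin(n * Math.PI * 6) * (1 - n) * 0.3;
};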
A callback function is necessary if you want the shape to end up back where it began, since applying a transformation that does not move the shape does not produce an animation. You'll probably have to alter the transformation values.
Hope this is useful!
There is a very good set of easing effects available in Raphael.
Here's a random set of circles that are "given" bounce easing.
Dynamically add animation to objects
The full range of easing effects can be found here. You can play around with them and reference the latest documentation at the same time.
Putting calls in a loop is not the thing to do, though. Use callbacks, which are readily available.
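As a sketch of that callback approach (reusing the rdm() helper and the bounce easing mentioned above; the names are illustrative):
// each circle schedules its own next wiggle when the previous animation finishes,
// so there is no global setInterval loop
function wiggle(circle, origCx, origCy, bound) {
    var newcx = origCx + rdm(-bound, bound);
    var newcy = origCy + rdm(-bound, bound);
    circle.animate({cx: newcx, cy: newcy}, rdm(500, 1200), 'bounce', function() {
        wiggle(circle, origCx, origCy, bound);
    });
}

wiggle(paper.circle(150, 150, 10), 150, 150, 10);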