With the help of D3.js I am generating splines from a set of points on a 2D canvas in HTML and JavaScript.
private spline = d3.line()
    .x(p => p.x)
    .y(p => p.y)
    .curve(d3.curveBundle.beta(1));
This is added to a canvas with some native JS code:
const ctx = canvas.getContext('2d'); // canvas is an HTMLCanvasElement
ctx.beginPath();
this.spline(points); // assumes the generator was bound to ctx via spline.context(ctx)
ctx.stroke();
ctx.closePath();
This works fine as long as I render one path at a time. But as soon as I have multiple paths with the same starting point, they start to overlap, and this causes some unwanted rendering effects: the paths become edgy and jagged.
I've already tried increasing context.imageSmoothingQuality to 'high' and played around with ctx.globalCompositeOperation, without any help.
A possible solution, IMHO, would be to remove the overlapping part and "glue" the paths together from the point where they split. The redundant part would be removed and the side effects would vanish, but reliably figuring out where the splines diverge sounds like a hard programming story.
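For what it's worth, a rough sketch of that "glue" idea, assuming each path is an array of {x, y} points and overlapping paths share an identical run of leading points:
function splitAtDivergence(paths) {
    const trunk = [];
    let i = 0;
    // extend the shared trunk while every path still agrees on point i
    while (paths.every(p => i < p.length &&
                            p[i].x === paths[0][i].x &&
                            p[i].y === paths[0][i].y)) {
        trunk.push(paths[0][i]);
        i += 1;
    }
    // keep the last shared point in each tail so the tails connect to the trunk
    const tails = paths.map(p => p.slice(Math.max(i - 1, 0)));
    return {trunk, tails};
}
The trunk would then be stroked once and each tail stroked separately, so no segment is ever drawn twice.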
Important note: these splines or paths are generated randomly. Only the starting and ending points are fixed. If you like, have a look at: http://b612-webdesign.de/kardion/guilloche/ (hit "Aktualisieren" to generate a new graph)
"Minimal" reproducible example: https://observablehq.com/#nextlevelshit/d3js-canvas-splines
On ObservableHQ you'll find an example of my issue. You can change the code and re-run the rendering at any time.
Related
I am trying to combine WebGL earth with the d3.geo.satellite projection.
I have managed to overlay the two projections on top of each other and sync their rotation, but I am having trouble syncing the zoom. When I sync them to match in size, the WebGL projection gets deformed, but the d3.geo.satellite one remains the same. I have tried different combinations of projection.scale and projection.distance without much success.
Here is a JSFiddle (it takes a little while to load the resources). You can drag to rotate (this works well), but if you zoom in (use the mouse wheel) you can see the problem.
https://jsfiddle.net/nxtwrld/7x7dLj4n/2/
The important code is at the bottom of the script - the scale function.
function scale() {
    var scale = d3.event.scale;
    var ratio = scale / scale0;
    // scale the d3 projection
    projection.scale(scale);
    // scale the Three.js earth mesh
    earth.scale.x = earth.scale.y = earth.scale.z = ratio;
}
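One direction that might be worth experimenting with (an untested sketch: camera is the Three.js perspective camera, and distance0 and scale0 are its start distance and the projection's start scale, both assumed to be captured at startup) is to move the camera instead of scaling the mesh, so the WebGL perspective changes together with the projection:
function scale() {
    var s = d3.event.scale;
    // zoom the d3 projection as before
    projection.scale(s);
    // zoom the Three.js view by moving the camera, not by scaling the mesh
    camera.position.z = distance0 * (scale0 / s);
}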
I am not using WebGL earth either, and your JSFiddle is not working anymore, but my assumption is that your problem is integrating D3.js with Three.js as a solution for a 3D globe.
May I suggest you try earthjs as your solution. Under the hood it uses D3.js v4 and Three.js revision 8x (both the latest at the time of writing), and it can combine SVG, canvas and Three.js (WebGL).
const g = earthjs({padding: 60})
    .register(earthjs.plugins.mousePlugin())
    .register(earthjs.plugins.threejsPlugin())
    .register(earthjs.plugins.autorotatePlugin())
    .register(earthjs.plugins.dropShadowSvg(), 'dropshadow')
    .register(earthjs.plugins.worldSvg('../d/world-110m.json'))
    .register(earthjs.plugins.globeThreejs('../globe/world.jpg'));
g._.options.showLakes = false;
g.ready(function () {
    g.create();
});
You can run the snippet above from here.
I'm using EaselJS and attempting to generate a simple water simulation, based on this physics liquid demo. The issue I'm struggling with is the final step, where the author states they "get hard edges". It is this step that merges the particles into an amorphous blob, which gives the effect of cohesion and flow.
In case the link is no longer available: in summary, I've followed the simulation "steps" and created a prototype for particle liquid with the following:
Created a particle physics simulation
Added a blur filter
Apply a threshold to get "hard edges"
So I wrote some code that uses a threshold check to color red (0xFF0000) any shapes/particles that meet the criterion - in this case, any that have a color greater than RGB (0, 0, 200). If not, they are colored blue (0x0000FF).
var blurFilter = new createjs.BlurFilter(emitter.size, emitter.size*3, 1);
var thresholdFilter = new createjs.ThresholdFilter(0, 0, 200, 0xFF0000, 0x0000FF);
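For reference, filters in EaselJS only take effect once the display object is cached, so one way to wire both filters up (a sketch: container is assumed to hold all the particle shapes, and stage is the EaselJS stage) would be:
// filters apply at cache time, so cache the container and refresh it per tick
container.filters = [blurFilter, thresholdFilter];
container.cache(0, 0, stage.canvas.width, stage.canvas.height);
createjs.Ticker.addEventListener("tick", function () {
    container.updateCache(); // re-run blur + threshold as the particles move
    stage.update();
});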
Note that only blue and red appear because of the previously mentioned threshold filter. For reference, the generated colors are RGB (0, 0, 0-255). The method r() simply generates a random number up to the passed-in value.
graphics.beginFill(createjs.Graphics.getRGB(0,0,r(255)));
I'm using this idea of applying a threshold criterion so that I can later set some boundaries around particle adhesion. My thought is that larger shapes would have greater "gravity".
You can see from the fountain of particles in the attached animated GIF that I've completed steps #1-2 above, but it is step #3 that I'm not certain how to apply. I haven't been able to identify a single filter in EaselJS that I could apply that would transform the shapes or merge them in any way.
I was considering that I might be able to do a getBounds() and draw a new shape, but the shapes wouldn't truly be merged at that point, nor would they exhibit the properties of liquid despite being larger and appearing combined.
bounds = blurFilter.getBounds(); // emitter.size+bounds.x, etc.
The issue really becomes how to define the shapes that are blurred in the image. Apart from squinting my eyes and using my imagination I haven't been able to come to a solution.
I also looked around for a way to apply gravity between shapes so they could, perhaps, draw together and combine, but maybe it's simply that EaselJS is not the right tool for this.
Thanks for any thoughts on how I might approach this.
I have a set of Raphael paths that I want to animate when a separate path is hovered over.
The client sent an SVG illustration that I had converted into Raphael code. This is just a small section of the larger image.
What I'm trying to do is as follows:
starting canvas
Start with a set of path objects in a line. When you hover over the red path, they animate in a spiral until they form:
ending canvas
I've done some research, and I think I'm going to have to animate each circle along a hidden path that traces the arc it must travel to reach its place in the final spiral (Animate along a path), but I'm not sure this is the most efficient way to create this animation.
On top of that, I'm not really sure how to calculate the angle and size of the hidden paths the circles would follow. What is going to be the best way to create this animation?
Thanks!
I think moving each circle along a hidden path is the easiest way.
I would suggest creating a new SVG file in the Inkscape vector editor, drawing circles in the start and end positions, and then connecting them with arcs. Save the file, then open it in any text editor and copy the data into your JS code (copy each path's "d" parameter and the coordinates of the circles).
It's been more than a year since this was asked, but maybe it will help other people.
Using simple algebra taught in high school, you can define two functions: linear and spiral.
function linear(i) {
    var x = i * 15;
    var y = x;
    return {cx: x + offset, cy: y + offset};
}

function spiral(i) {
    var r = 50 - i * 5;       // the radius shrinks as i grows
    var x = r * Math.cos(i);
    var y = r * Math.sin(i);
    return {cx: x + offset, cy: y + offset};
}
This was just five minutes of fiddling, so it might seem cheap. Of course, you can define better algorithms using transform functions.
jsfiddle here
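For illustration, a hypothetical wiring of these functions into the hover animation (paper, offset and the trigger path are assumed to exist):
var circles = [];
for (var i = 0; i < 10; i++) {
    var p = linear(i);                  // start on the line
    circles.push(paper.circle(p.cx, p.cy, 5));
}
trigger.mouseover(function () {
    for (var i = 0; i < circles.length; i++) {
        var p = spiral(i);              // end on the spiral
        circles[i].animate({cx: p.cx, cy: p.cy}, 800, "<>"); // "<>" = ease in/out
    }
});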
I am using the Three.js library to display a point cloud in a web browser. The point cloud is generated once at start-up and no further points are added or removed, but it does need to be rotated, panned and zoomed. I've gone through the tutorial about creating particles in three.js here.
Using the example I can create particles that are squares, or use an image of a sphere to create a texture. The image is closer to what I want, but is it possible to generate the point cloud without using the image - with sphere geometry, for example?
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
This obscuring of the images is the reason I would like to generate the points using shapes. I have tried replacing particles = new THREE.Geometry() with THREE.SphereGeometry(radius, segments, rings) and tried to change the vertices to spheres.
So my question is: how do I modify the example code so that it renders spheres (or points) instead of squares? Also, is a particle system the most efficient approach for my particular case, or should I just generate the particles and set their individual positions? As I mentioned, I only generate the points once, but then rotate, zoom and pan them. (I used the TrackBall sample code to get the mouse events working.)
Thanks for your help
I don't think rendering a point cloud with spheres is very efficient. You should be able to get away with a particle system and use a texture or a small canvas program to draw a circle.
One of the first three.js samples uses a canvas program; here are the important bits:
var PI2 = Math.PI * 2;
var program = function (context) {
    // draw a filled unit circle; three.js scales it per particle
    context.beginPath();
    context.arc(0, 0, 1, 0, PI2, true);
    context.closePath();
    context.fill();
};

var particle = new THREE.Particle(new THREE.ParticleCanvasMaterial({
    color: Math.random() * 0x808008 + 0x808080,
    program: program
}));
Feel free to adapt the code for the WebGL renderer.
Another clever solution I've seen in the examples is using an encoded WebM video to store the data and passing it to a GLSL shader, which is rendered through a particle system in three.js.
If your point cloud comes from a Kinect, these resources might be useful:
DepthCam
KinectJS
When comparing my code to http://threejs.org/examples/#webgl_custom_attributes_particles3, I saw that the only difference was:
vec4 outColor = texture2D( texture, gl_PointCoord );
if ( outColor.a < 0.5 ) discard;
gl_FragColor = outColor;
Adding this to the fragment shader fixed the problem for me.
It wasn't z-fighting: some corners would randomly overlap distant particles.
material.alphaTest = 0.5 didn't work, and turning off depth writes/tests messed up the viewing order.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
You can get rid of the transparency overlapping problem of the underlying square structure by turning off depth testing on the material:
depthTest: false
The problem then is that if you add additional objects to the scene, the depth testing will fail and the PointCloud will be rendered in front of the other objects, ignoring the actual order. To get around that you can additionally turn off depth writes:
depthWrite: false
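A minimal sketch of where those flags go, assuming a recent three.js (THREE.Points/THREE.PointsMaterial) and an existing geometry and circular sprite texture:
var material = new THREE.PointsMaterial({
    map: spriteTexture,
    transparent: true,
    depthTest: false,  // square corners no longer clip the points behind them
    depthWrite: false  // the cloud no longer breaks ordering against other objects
});
scene.add(new THREE.Points(geometry, material));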
I want to check for a collision between two sprites in an HTML5 canvas. For the sake of the discussion, let's assume that both sprites are IMG objects and that a collision means the alpha channel is not 0. Both sprites can be rotated around the object's center but have no other transformations, in case that makes this any easier.
Now the obvious solution I came up with would be this:
calculate the transformation matrix for both
figure out a rough estimate of the area the code should test (like the offset of both plus calculated extra space for the rotation)
for all the pixels in the intersecting rectangle, transform the coordinates and test the image at the calculated position (rounded to the nearest neighbour) for the alpha channel, then abort on the first hit
The problem I see with that is that a) there are no matrix classes in JavaScript, which means I have to do the matrix math myself in JavaScript, which could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore, I have to replicate something I already have to do when drawing (or that the canvas does for me when setting up the matrices).
I wonder if I'm missing anything here and if there is an easier solution for collision detection.
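For concreteness, a minimal sketch of the per-pixel test in step 3, with assumed sprite fields: x/y (top-left corner before rotation), width/height, rotation (radians, about the center) and alphaData (the .data array from getImageData on the unrotated image):
function alphaAt(sprite, wx, wy) {
    var cx = sprite.x + sprite.width / 2;
    var cy = sprite.y + sprite.height / 2;
    var cos = Math.cos(sprite.rotation);
    var sin = Math.sin(sprite.rotation);
    var dx = wx - cx, dy = wy - cy;
    // rotate the world point by -rotation about the center, then shift it
    // into the sprite's local pixel grid (nearest neighbour)
    var px = Math.round(cx + dx * cos + dy * sin - sprite.x);
    var py = Math.round(cy - dx * sin + dy * cos - sprite.y);
    if (px < 0 || py < 0 || px >= sprite.width || py >= sprite.height) return 0;
    return sprite.alphaData[(py * sprite.width + px) * 4 + 3];
}
Two sprites would collide wherever alphaAt returns non-zero for both at the same point of the estimated intersection rectangle.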
I'm not a JavaScript coder, but I'd imagine the same optimisation tricks work just as well in JavaScript as they do in C++.
Just rotate the corners of the sprite instead of every pixel. Effectively you would be doing something like software texture mapping: you can work out the x,y position of a given pixel from the corners using gradient information. Look up software texture mapping for more info.
If you quadtree-decompose the sprite into "hit" and "non-hit" areas, then you can check whether a given quadtree node is all "non-hit", all "hit", or a "possible hit" (i.e. it contains both hit and non-hit pixels). The first two are trivial to pass through. In the last case you go down to the next decomposition level and repeat the test. This way you only check the pixels you need to, and for large "non-hit" and "hit" areas you don't have to do such a complex set of checks.
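A minimal sketch of that descent, assuming precomputed nodes of the form {x, y, size, state, children} where state is "hit", "empty" or "mixed" (children present only on "mixed" nodes):
function regionHits(node, rx, ry, rw, rh) {
    // reject if the node's square doesn't touch the query rectangle at all
    if (rx > node.x + node.size || rx + rw < node.x ||
        ry > node.y + node.size || ry + rh < node.y) return false;
    if (node.state === "empty") return false; // trivially no hit
    if (node.state === "hit") return true;    // trivially a hit
    // "mixed": go down one decomposition level and repeat the test
    return node.children.some(function (child) {
        return regionHits(child, rx, ry, rw, rh);
    });
}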
Anyway, those are just a couple of thoughts.
I have to replicate something I already have to do on drawing
Well, you could make a new rendering context, plot one rotated white-background mask into it, set the compositing operation to lighter, and plot the other rotated mask on top at the given offset.
Now if there's a non-white pixel left, there's a hit. You'd still have to getImageData and sift through the pixels to find that out. You might be able to reduce that workload a bit by scaling the resulting image down (relying on anti-aliasing to keep some pixels non-white), but I'm thinking it's probably still going to be quite slow.
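A sketch of that check, where drawMask is a hypothetical helper that paints a sprite's opaque pixels black on a white background at a given rotation and offset:
function masksOverlap(maskA, rotA, maskB, rotB, dx, dy, w, h) {
    var off = document.createElement('canvas');
    off.width = w;
    off.height = h;
    var octx = off.getContext('2d');
    drawMask(octx, maskA, rotA, 0, 0);         // black-on-white mask A
    octx.globalCompositeOperation = 'lighter'; // additive: white saturates
    drawMask(octx, maskB, rotB, dx, dy);       // black-on-white mask B
    var data = octx.getImageData(0, 0, w, h).data;
    for (var i = 0; i < data.length; i += 4) {
        if (data[i] < 255) return true;        // non-white: both masks were black here
    }
    return false;
}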
I have to test for collisions every frame which makes this pretty expensive.
Yeah, I think realistically you're going to be using precalculated collision tables. If you've got space for it, you could store one hit/no-hit bit for every combination of sprite A, sprite B, relative rotation, relative-x-normalised-to-rotation and relative-y-normalised-to-rotation. Depending on how many sprites you have and how many steps of rotation or movement there are, this could get rather large.
A compromise would be to store the pre-rotated masks of each sprite in a JavaScript array (of Numbers, giving you 32 bits/pixels of easily AND-able data per element, or of characters in a String, giving you 16 bits) and AND each line of the intersecting sprite masks together.
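A sketch of that per-line AND, assuming LSB-first packing (bit k of word i is pixel 32*i + k) and a non-negative pixel offset of sprite B relative to sprite A:
function rowsCollide(rowA, rowB, shift) {
    var words = shift >> 5; // whole 32-bit words of offset
    var bits = shift & 31;  // leftover bit offset
    for (var i = 0; i < rowB.length; i++) {
        var a = i + words;  // the rowA word overlapping the start of rowB[i]
        if (a < rowA.length && (rowA[a] & (rowB[i] << bits)) !== 0) return true;
        if (bits && a + 1 < rowA.length &&
            (rowA[a + 1] & (rowB[i] >>> (32 - bits))) !== 0) return true;
    }
    return false;
}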
Or, give up on pixels and start looking at, e.g., paths.
Same problem, an alternative solution. First I use getImageData to find a polygon that surrounds the sprite. Be careful here, because the implementation works with images with a transparent background that contain a single solid object, like a ship. The next step is the Ramer-Douglas-Peucker algorithm to reduce the number of vertices in the polygon. I finally get a polygon with very few vertices, easy and cheap to rotate and to check for collisions against the other sprites' polygons.
http://jsfiddle.net/rnrlabs/9dxSg/
var canvas = document.getElementById("canvas");
var context = canvas.getContext("2d");
var img = document.getElementById("img");

context.drawImage(img, 0, 0);
var dat = context.getImageData(0, 0, img.width, img.height);

// findStartPixel and followPath trace the outline - see jsfiddle
var startPixel = findStartPixel(dat, 0);
var path = followPath(startPixel, dat, 0);

// simplify the traced outline; 4 is the RDP epsilon
var map = properRDP(path.map, 4, path.startpixel.x, path.startpixel.y);

// draw the simplified polygon
context.beginPath();
context.moveTo(path.startpixel.x, path.startpixel.y);
for (var i = 0; i < map.length; i++) {
    var p = map[i];
    context.lineTo(p.x, p.y);
}
context.strokeStyle = 'red';
context.closePath();
context.stroke();