Background:
I am animating a circle down my web page to follow a certain path using GSAP's motion path plugin. I have scaled the path to ensure it follows the same proportions on all devices using this code:
let w = window.innerWidth;
let h = document.querySelector("main").offsetHeight;
const pathSpec = "M305.91,0c8.54,101.44,17.07,203.69,5.27,304.8s-45.5,202.26-112.37,279c-36,41.37-80.92,74.88-113.34,119.16-66.32,90.59-70.8,210.86-72.71,323.13C9.69,1206.79,9.48,1387.9,1.71,1568c-8,186.11,25.77,370.42,48.07,554.43q4.36,36,8.07,72.07c14.18,140.22,9.95,281.38,8.06,422-1.87,139.1,26.43,260.75,66.38,393.67l82.47,274.39,8.25,27.44,29.87,129c40.8,176.3,87,363.23,64.51,545.49-11,89.33-3.5,182.44-2.28,272.84l3.83,286.1c3.63,188.78,5.07,377.63,7.6,566.44l3.83,286.08c1.27,94.55-4,191.79,3.84,286,14.37,172.9-10,353.08-14.33,527.78q-7,283.21,1.14,566.55,8.14,281.61,31.34,562.53c7.43,90,21.31,180.76,26.7,270.47,5.6,93.24-4,190-6,283.47l-3.45,164.33";
const yScale = h / 8059.08;
const xScale = w / 380.28;
let newPath = "";
const pathBits = Array.from(pathSpec.matchAll(/([MmLlHhVvCcSsQqTtAa])([\d\.,\se-]*)/g));
console.log(pathBits);
pathBits.forEach((bit) => {
const command = bit[1];
const coordinates = bit[2];
newPath += command;
const coordArray = coordinates.split(/[,\s]/g);
// console.log("coordArray", coordArray);
newPath += coordArray.map((coord, index) => {
if (index % 2 == 0) {
return parseFloat(coord) * yScale;
}
else {
return parseFloat(coord) * xScale;
}
}).join(',');
});
So, essentially, I've created coordArray to split each command's coordinates into x and y values, scaled them, and appended the result to newPath, which is the scaled version of the path. Then I've passed newPath to motionPath using:
const tlCircle = gsap.timeline();
tlCircle.to(".circle-animation", {
scrollTrigger: {
scroller: "main",
trigger: ".circle-animation",
start: "top 10%",
end: "max",
scrub: 1,
},
ease: "true",
motionPath: {
path: newPath,
autoRotate: true
}
});
When I put the pathSpec var in the path parameter, my circle animates, but it doesn't when I put newPath in, even though the web console shows it is in the same format as pathSpec, just scaled.
I've tried to debug this with console logs to see what the inputs and outputs are, and they are what I want: newPath is an SVG path string, scaled to fit my window, but it doesn't work when I use it in motionPath.
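A quick sanity check along these lines (a rough sketch; getNumbers is just an ad-hoc helper, not part of GSAP) could confirm whether the two strings contain the same number of coordinates and whether any value came out as NaN:
// Ad-hoc check (not part of GSAP): compare coordinate counts and look for NaN.
const getNumbers = (d) => d.match(/-?[\d.]+(?:e-?\d+)?/g) || [];
console.log("original coordinate count:", getNumbers(pathSpec).length);
console.log("scaled coordinate count:", getNumbers(newPath).length);
console.log("scaled path contains NaN:", /NaN/.test(newPath));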
I am trying to create a struct that has a large enough data buffer to hold HTML5 canvas ImageData larger than 64 x 64 pixels. The struct and implementation are defined here in Rust:
// src/lib.rs
use wasm_bindgen::prelude::*;
extern crate fixedbitset;
extern crate web_sys;
#[wasm_bindgen]
pub struct CanvasSource {
width: u32,
height: u32,
data: Vec<u8>,
}
#[wasm_bindgen]
impl CanvasSource {
pub fn width(&self) -> u32 {
self.width
}
pub fn height(&self) -> u32 {
self.height
}
// returns pointer to canvas image data
pub fn data(&self) -> *const u8 {
self.data.as_ptr()
}
pub fn cover_in_blood(&mut self) {
let blood: Vec<u8> = vec![252, 3, 27, 255];
let mut new_data = blood.clone();
for _ in 0..self.width {
for _ in 0..self.height {
let pixel = blood.clone();
new_data = [new_data, pixel].concat()
}
}
self.data = new_data;
}
pub fn new(width: u32, height: u32, initial_data: Vec<u8>) -> CanvasSource {
let data_size = (width * height) as usize;
let mut data = initial_data;
data.resize(data_size, 0);
CanvasSource {
width,
height,
data,
}
}
}
...and called from here in TypeScript:
import { useEffect, useRef, useState } from "react";
import { CanvasSource } from "rust-canvas-prototype";
import { memory } from "rust-canvas-prototype/rust_canvas_prototype_bg.wasm";
import styles from "./DirectCanvas.module.css";
const getRenderLoop = (
source: CanvasSource,
ctx: CanvasRenderingContext2D,
) => {
if (source && ctx) {
const loop = () => {
const sourceDataPtr = source.data();
const width = source.width();
const height = source.height();
const regionSize = width * height * 4;
const pixelData = new Uint8ClampedArray(
memory.buffer,
sourceDataPtr,
regionSize
)
const imageData = new ImageData(pixelData, width, height);
ctx.putImageData(imageData, 0, 0)
};
return loop;
}
return null;
}
export function DirectCanvas() {
const [source, setSource] = useState<CanvasSource>();
const [ctx, setCtx] = useState<CanvasRenderingContext2D | null>(null);
const [paused, setPaused] = useState<boolean>(false);
// undefined on init, null when paused
const [animationId, setAnimationId] = useState<number>(0);
const canvasElement = useRef<HTMLCanvasElement>(null);
const initialized = source && ctx;
// initialization
useEffect(() => {
if (!source) {
console.log("loading source");
let [width, height] = [100, 100];
// uncomment below to cause error
// [width, height] = [358, 358]
setSource(CanvasSource.new(width, height, new Uint8Array([])))
}
if (source && !ctx && canvasElement.current) {
canvasElement.current.height = source.height();
canvasElement.current.width = source.width();
setCtx(canvasElement.current.getContext("2d"));
}
}, [source, ctx])
useEffect(() => {
if (initialized) {
const renderLoop = getRenderLoop(source, ctx);
if (renderLoop) {
renderLoop();
setTimeout(() => {
setAnimationId(prev => prev + 1);
}, 10)
}
}
}, [source, ctx, animationId]);
return (
<div className={styles.Container}>
<span className={styles.Controls}>
<button onClick={() => source?.cover_in_blood()}>Splatter</button>
</span>
<canvas ref={canvasElement}></canvas>
</div>
)
}
The function works correctly for sizes of ~100x100 or less, but once the total area begins to exceed that, JS throws the following error:
Uncaught RangeError: attempting to construct out-of-bounds Uint8ClampedArray on ArrayBuffer
loop DirectCanvas.tsx:23
DirectCanvas DirectCanvas.tsx:77
...
Preliminary research suggests that it is a stack size problem on Rust's end, but attempts to increase the stack size in the config.toml throw errors of their own:
= note: rust-lld: error: unknown argument: -Wl,-zstack-size=29491200
How do I allocate a large enough stack size to paint to canvases larger than 100x100? (minimal reproducible example found here)
Welp, egg's on my face: the problem wasn't in fact on the Rust side.
The Uint8Array I was passing in simply wasn't large enough to hold ImageData larger than ~100x100. Ensuring the CanvasSource was initialized to the correct size did the trick:
// initialization
useEffect(() => {
if (!source) {
console.log("loading source");
const [width, height] = [1000, 1000];
// the fixed line: pass a correctly sized buffer as the third argument
setSource(CanvasSource.new(width, height, new Uint8Array(width * height * 4)))
}
...
}, [source, ctx])
On the Rust side, I did have a line of code that I thought would handle this:
// src/lib.rs
pub fn new(width: u32, height: u32, initial_data: Vec<u8>) -> CanvasSource {
let data_size = (width * height) as usize;
let mut data = initial_data;
// here
data.resize(data_size, 0);
CanvasSource {
width,
height,
data,
}
}
But, evidently, resizing it on the Rust end either (1) doesn't update the space that JS was looking for, or (2) I am misusing Vec::resize(). Both are possible. (A likely culprit: data_size is computed as width * height, while the JS render loop reads width * height * 4 bytes - ImageData stores four RGBA bytes per pixel - so even after the resize the Rust-side buffer is only a quarter of the size JS expects.) Thanks everyone for pointing stuff out.
I'm using Pixi with PixiOverlay on leaflet. I have the following jsfiddle for a dummy simulation. The objective: once you click Add Image 2 - it adds a picture of a hamster randomly on the map.
It (almost) works.
The problem:
Error message: "BaseTexture added to the cache with an id [hamster] that already had an entry"
I couldn't figure out where to put the loader and how to integrate it properly in terms of code organization (do I need to use it only once? what if I have other layers to add?). So I assume my challenge is here:
this.loader.load((loader, resources) => {...}
Minor: how to reduce the size of the hamster :-)
My JS code (also on jsfiddle):
class Simulation
{
constructor()
{
// center of the map
var center = [1.8650, 51.2094];
// Create the map
this.map = L.map('map').setView(center, 2);
// Set up the OSM layer
L.tileLayer(
'http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
maxZoom: 18
}).addTo(this.map);
this.imagesLayer = new L.layerGroup();
this.imagesLayer.addTo(this.map);
}
_getRandomCoord()
{
var randLat = Math.floor(Math.random() * 90);
randLat *= Math.round(Math.random()) ? 1 : -1;
var randLon = Math.floor(Math.random() * 180);
randLon *= Math.round(Math.random()) ? 1 : -1;
return [randLat,randLon]
}
addImage2()
{
this.loader = new PIXI.Loader()
this.loader.add('hamster', 'https://cdn-icons-png.flaticon.com/512/196/196817.png')
this.loader.load((loader, resources) => {
let markerTexture = resources.hamster.texture
let markerLatLng = this._getRandomCoord()
let marker = new PIXI.Sprite(markerTexture)
marker.anchor.set(0.5, 1)
let pixiContainer = new PIXI.Container()
pixiContainer.addChild(marker)
let firstDraw = true
let prevZoom
let pixiOverlay = L.pixiOverlay(utils => {
let zoom = utils.getMap().getZoom()
let container = utils.getContainer()
let renderer = utils.getRenderer()
let project = utils.latLngToLayerPoint
let scale = utils.getScale()
if (firstDraw) {
let markerCoords = project(markerLatLng)
marker.x = markerCoords.x
marker.y = markerCoords.y
}
if (firstDraw || prevZoom !== zoom) {
marker.scale.set(1 / scale)
}
firstDraw = true
prevZoom = zoom
renderer.render(container)
}, pixiContainer)
this.imagesLayer.addLayer(pixiOverlay);
})
}
addTriangle()
{
console.log("Trinalge")
var polygonLatLngs = [
[51.509, -0.08],
[51.503, -0.06],
[51.51, -15.047],
[21.509, -0.08]
];
var projectedPolygon;
var triangle = new PIXI.Graphics();
var pixiContainer = new PIXI.Container();
pixiContainer.addChild(triangle);
var firstDraw = true;
var prevZoom;
var pixiOverlay = L.pixiOverlay(function(utils) {
var zoom = utils.getMap().getZoom();
var container = utils.getContainer();
var renderer = utils.getRenderer();
var project = utils.latLngToLayerPoint;
var scale = utils.getScale();
if (firstDraw) {
projectedPolygon = polygonLatLngs.map(function(coords) {return project(coords);});
}
if (firstDraw || prevZoom !== zoom) {
triangle.clear();
triangle.lineStyle(3 / scale, 0x3388ff, 1);
triangle.beginFill(0x3388ff, 0.2);
projectedPolygon.forEach(function(coords, index) {
if (index == 0) triangle.moveTo(coords.x, coords.y);
else triangle.lineTo(coords.x, coords.y);
});
triangle.endFill();
}
firstDraw = false;
prevZoom = zoom;
renderer.render(container);
}.bind(this), pixiContainer);
this.imagesLayer.addLayer(pixiOverlay)
}
removeLayer()
{
this.imagesLayer.clearLayers();
}
}
var simulation = new Simulation();
TLDR: Updated jsfiddle:
https://jsfiddle.net/gbsdfm97/
More info below:
First problem: loading resources (textures)
There was an error in the console because you loaded the hamster image on each click:
addImage2()
{
this.loader = new PIXI.Loader()
this.loader.add('hamster', 'https://cdn-icons-png.flaticon.com/512/196/196817.png')
this.loader.load((loader, resources) => {
...
A better approach is to load the image (resource) once at the beginning and then just reuse what is already loaded in memory:
constructor()
{
...
this.markerTexture = null;
this._loadPixiResources();
}
...
_loadPixiResources()
{
this.loader = new PIXI.Loader()
this.loader.add('hamster', 'https://cdn-icons-png.flaticon.com/512/196/196817.png')
this.loader.load((loader, resources) => {
this.markerTexture = resources.hamster.texture;
})
}
...
addImage2()
{
...
let marker = new PIXI.Sprite(this.markerTexture);
Second problem: size of hamsters :)
The scale was set like this:
marker.scale.set(1 / scale)
That was too big, so it was changed to:
// affects size of hamsters:
this.scaleFactor = 0.05;
...
marker.scale.set(this.scaleFactor / scale);
The scale of the hamsters (not the triangles!) is now updated when the zoom changes, e.g. when the user zooms with the mouse scroll wheel.
Third problem: too many layers in pixiOverlay
Previously, each click on the Add Image 2 or Add Triangle button created a new pixiContainer and a new pixiOverlay, which was then added as a new layer: this.imagesLayer.addLayer(pixiOverlay);
The new version is a bit simplified: there is only one pixiContainer and one pixiOverlay, created at the beginning:
constructor()
{
...
// Create one Pixi container for pixiOverlay in which we will keep hamsters and triangles:
this.pixiContainer = new PIXI.Container();
let prevZoom;
// Create one pixiOverlay:
this.pixiOverlay = L.pixiOverlay((utils, data) => {
...
}, this.pixiContainer)
this.imagesLayer.addLayer(this.pixiOverlay);
}
this.pixiOverlay is added as a single layer; in the rest of the program we reuse this.pixiOverlay. We also reuse this.pixiContainer, because it is returned from utils - see:
let container = utils.getContainer() // <-- this is our "this.pixiContainer"
...
container.addChild(marker)
renderer.render(container)
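Putting these pieces together, addImage2() can then be reduced to roughly the following (a sketch only; it assumes this.markerTexture was set by _loadPixiResources(), that this.pixiContainer and this.pixiOverlay were created in the constructor, and that your version of Leaflet.PixiOverlay exposes redraw()):
addImage2()
{
    // Rough sketch: reuse the texture, container and overlay created earlier.
    if (!this.markerTexture) return; // texture not loaded yet
    let marker = new PIXI.Sprite(this.markerTexture);
    marker.anchor.set(0.5, 1);
    // Ad-hoc property (illustration only) to carry the position into the draw callback:
    marker.markerLatLng = this._getRandomCoord();
    this.pixiContainer.addChild(marker);
    // Re-run the draw callback so the new marker gets projected and scaled:
    this.pixiOverlay.redraw();
}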
Bonus: Triangles
Now you can add many triangles - one per click.
Note: the triangles do not change scale - this is a difference compared to the hamsters.
I have got the following code:
const linksGraphics = new PIXI.Graphics();
const update = () => {
linksGraphics.clear();
linksGraphics.alpha = 1;
if (forceLinkActive) {
data.links.forEach(link => {
let { source, target } = link;
linksGraphics.lineStyle(2, 0x000000);
linksGraphics.moveTo(source.x, source.y);
linksGraphics.lineTo(target.x, target.y);
});
linksGraphics.endFill();
} }
app.ticker.add( () => update() );
Here data.links is an array of edge data {source: number, target: number}. If I understand correctly, all lines are part of one PIXI.Graphics object. But what I need is:
every line should have its own opacity
every line should have an event for mouse over
Any ideas how to modify my code?
Thanks.
It's been a while, but I can make a suggestion. Lines do not react to mouse/pointer over events in pixijs.
Instead, you may want to accompany the line with a transformed rectangle with alpha value 0 and listen for mouse/pointer events on this rectangle.
For example, let's change the alpha value of the line when the mouse/pointer hovers over the accompanying rectangle.
const app = new PIXI.Application({
width: window.innerWidth,
height: window.innerHeight,
backgroundColor: 0x283230
});
document.body.appendChild(app.view);
// 1. PRELIMINARY COMPUTATIONS
// Coordinates of the end points of a line
let x0 = 100;
let y0 = 100;
let x1 = 200;
let y1 = 200;
// Find midpoint for translation
let xmid = 0.5*(x0+x1);
let ymid = 0.5*(y0+y1);
// Length of the line
let length = Math.hypot(x0-x1, y0-y1);
// Alignment angle of the line, i.e. angle with the x axis
let angle = Math.atan((y1-y0)/(x1-x0));
// 2. LINE
const line = new PIXI.Graphics();
// Arbitrary line style, say we have a non-white background
line.lineStyle(8,0xffffff,1);
line.moveTo(x0,y0);
line.lineTo(x1,y1);
// 3. ACCOMPANYING RECTANGLE
line.rectangle = new PIXI.Graphics();
line.rectangle.beginFill(0xffffff);
// Since we are going to translate, think of 0,0 is the center point on the rectangle
// Width of the rectangle is selected arbitrarily as 30
const width = 30;
line.rectangle.drawRect(-length/2,-width/2,length,width);
line.rectangle.endFill();
line.rectangle.alpha = 0;
line.rectangle.interactive = true;
line.rectangle.on("pointerover", reactOver);
line.rectangle.on("pointerout", reactOut);
// Apply transformation
line.rectangle.setTransform(xmid, ymid,1,1,angle);
app.stage.addChild(line);
// Add rectangle to the stage too.
app.stage.addChild(line.rectangle);
// Let's change alpha value of the line when user hovers.
function reactOver(){
line.alpha = 0.5;
}
function reactOut(){
line.alpha = 1;
}
To the PEN, Hover a line in pixijs
We can expand this logic to a rectangle for instance. But this time you need two accompanying rectangles (with alpha=0) where one of them is wider and the other is narrower than the unfilled rectangle. For example,
const app = new PIXI.Application({
width: window.innerWidth,
height: window.innerHeight,
backgroundColor: 0x283230
});
document.body.appendChild(app.view);
const x = 100;
const y = 100;
const width = 150;
const height = 100;
const hoverWidth = 20;
const rect = new PIXI.Graphics();
rect.lineStyle(4, 0xffffff,1);
rect.drawRect(x,y,width,height);
rect.outer = new PIXI.Graphics();
rect.inner = new PIXI.Graphics();
// Fill outer
rect.outer.alpha = 0;
rect.outer.beginFill(0xffffff);
rect.outer.drawRect(x-hoverWidth/2, y-hoverWidth/2, width+hoverWidth, height+hoverWidth);
rect.outer.endFill();
// Fill inner
rect.inner.alpha = 0;
rect.inner.beginFill(0xffffff);
rect.inner.drawRect(x+hoverWidth/2, y+hoverWidth/2, width-hoverWidth, height-hoverWidth);
rect.inner.endFill();
// Add interaction and listeners
rect.outer.interactive = true;
rect.inner.interactive = true;
rect.outer.on("pointerover", pOverOuter);
rect.outer.on("pointerout", pOutOuter);
rect.inner.on("pointerover", pOverInner);
rect.inner.on("pointerout", pOutInner);
app.stage.addChild(rect);
app.stage.addChild(rect.outer);
app.stage.addChild(rect.inner);
// Listeners
let overOuter = false;
let overInner = false;
function pOverOuter(){
overOuter = true;
changeAlpha();
// rect.alpha = 0.5;
}
function pOutOuter(){
overOuter = false;
changeAlpha();
}
function pOverInner(){
overInner = true;
changeAlpha();
// rect.alpha = 1;
}
function pOutInner(){
overInner = false;
changeAlpha();
// rect.alpha = 0.5;
}
function changeAlpha(){
rect.alpha = (overOuter && !overInner)? 0.5: 1;
}
To the PEN, Hover a rectangle in pixijs
For your first requirement, try creating a separate graphics object for each line and setting alpha on each one.
For your second requirement, you need to set the interactive property of the graphics (linksGraphics) object to true, like below:
linksGraphics.interactive = true;
and then attach a function to be executed on the mouseover event, like below:
var mouseOverAction = function () {
//Some code
};
linksGraphics.on('mouseover', mouseOverAction);
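A rough sketch of how both suggestions could be combined (it assumes, as in the question, that each link's source and target already carry x/y coordinates; link.opacity is a hypothetical per-link value):
// One Graphics object per link, so each line keeps its own alpha and listeners.
const linkContainer = new PIXI.Container();
app.stage.addChild(linkContainer);
data.links.forEach(link => {
    const lineGraphics = new PIXI.Graphics();
    lineGraphics.lineStyle(2, 0x000000);
    lineGraphics.moveTo(link.source.x, link.source.y);
    lineGraphics.lineTo(link.target.x, link.target.y);
    lineGraphics.alpha = link.opacity || 1; // per-line opacity (link.opacity is hypothetical)
    lineGraphics.interactive = true;
    lineGraphics.hitArea = lineGraphics.getBounds(); // a bare line is not hit-testable
    lineGraphics.on('mouseover', () => { lineGraphics.alpha = 0.5; });
    lineGraphics.on('mouseout', () => { lineGraphics.alpha = link.opacity || 1; });
    linkContainer.addChild(lineGraphics);
});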
You can define a hitArea on a graphic, and with getBounds() you can make a line clickable. After you do that, you can also assign pointer events to the graphic.
const linksGraphics = new PIXI.Graphics();
const update = () => {
linksGraphics.clear();
linksGraphics.alpha = 1;
if (forceLinkActive) {
data.links.forEach(link => {
let { source, target } = link;
linksGraphics.lineStyle(2, 0x000000);
linksGraphics.moveTo(source.x, source.y);
linksGraphics.lineTo(target.x, target.y);
//A line itself is not clickable
linksGraphics.hitArea = linksGraphics.getBounds();
});
linksGraphics.endFill();
}
}
app.ticker.add( () => update() );
I'm trying to implement semantic zooming while using Mike Bostock's Towards Reusable Charts pattern (where a chart is represented as a function). In my zoom handler, I'd like to use transform.rescaleX to update my scale and then simply call the function again.
It almost works but the rescaling seems to accumulate zoom transforms getting faster and faster. Here's my fiddle:
function chart() {
let aspectRatio = 10.33;
let margin = { top: 0, right: 0, bottom: 5, left: 0 };
let current = new Date();
let scaleBand = d3.scaleBand().padding(.2);
let scaleTime = d3.scaleTime().domain([d3.timeDay(current), d3.timeDay.ceil(current)]);
let axis = d3.axisBottom(scaleTime);
let daysThisMonth = d3.timeDay.count(d3.timeMonth(current), d3.timeMonth.ceil(current));
let clipTypes = [ClipType.Scheduled, ClipType.Alarm, ClipType.Motion];
let zoom = d3.zoom().scaleExtent([1 / daysThisMonth, 1440]);
let result = function(selection) {
selection.each(function(data) {
let selection = d3.select(this);
let outerWidth = this.getBoundingClientRect().width;
let outerHeight = outerWidth / aspectRatio;
let width = outerWidth - margin.left - margin.right;
let height = outerHeight - margin.top - margin.bottom;
scaleBand.domain(d3.range(data.length)).range([0, height * .8]);
scaleTime.range([0, width]);
zoom.on('zoom', _ => {
scaleTime = d3.event.transform.rescaleX(scaleTime);
selection.call(result);
});
let svg = selection.selectAll('svg').data([data]);
let svgEnter = svg.enter().append('svg').attr('viewBox', '0 0 ' + outerWidth + ' ' + outerHeight);//.attr('preserveAspectRatio', 'xMidYMin slice');
svg = svg.merge(svgEnter);
let defsEnter = svgEnter.append('defs');
let defs = svg.select('defs');
let gMainEnter = svgEnter.append('g').attr('id', 'main');
let gMain = svg.select('g#main').attr('transform', 'translate(' + margin.left + ' ' + margin.top + ')');
let gAxisEnter = gMainEnter.append('g').attr('id', 'axis');
let gAxis = gMain.select('g#axis').call(axis.scale(scaleTime));
let gCameraContainerEnter = gMainEnter.append('g').attr('id', 'camera-container');
let gCameraContainer = gMain.select('g#camera-container').attr('transform', 'translate(' + 0 + ' ' + height * .2 + ')').call(zoom);
let gCameraRowsEnter = gCameraContainerEnter.append('g').attr('id', 'camera-rows');
let gCameraRows = gCameraContainer.select('g#camera-rows');
let gCameras = gCameraRows.selectAll('g.camera').data(d => {
return d;
});
let gCamerasEnter = gCameras.enter().append('g').attr('class', 'camera');
gCameras = gCameras.merge(gCamerasEnter);
gCameras.exit().remove();
let rectClips = gCameras.selectAll('rect.clip').data(d => {
return d.clips.filter(clip => {
return clipTypes.indexOf(clip.type) !== -1;
});
});
let rectClipsEnter = rectClips.enter().append('rect').attr('class', 'clip').attr('height', _ => {
return scaleBand.bandwidth();
}).attr('y', (d, i, g) => {
return scaleBand(Array.prototype.indexOf.call(g[i].parentNode.parentNode.childNodes, g[i].parentNode)); //TODO: sloppy
}).style('fill', d => {
switch(d.type) {
case ClipType.Scheduled:
return '#0F0';
case ClipType.Alarm:
return '#FF0';
case ClipType.Motion:
return '#F00';
};
});
rectClips = rectClips.merge(rectClipsEnter).attr('width', d => {
return scaleTime(d.endTime) - scaleTime(d.startTime);
}).attr('x', d => {
return scaleTime(d.startTime);
});
rectClips.exit().remove();
let rectBehaviorEnter = gCameraContainerEnter.append('rect').attr('id', 'behavior').style('fill', '#000').style('opacity', 0);
let rectBehavior = gCameraContainer.select('rect#behavior').attr('width', width).attr('height', height * .8);//.call(zoom);
});
};
return result;
}
// data model
let ClipType = {
Scheduled: 0,
Alarm: 1,
Motion: 2
};
let data = [{
id: 1,
src: "assets/1.jpg",
name: "Camera 1",
server: 1
}, {
id: 2,
src: "assets/2.jpg",
name: "Camera 2",
server: 1
}, {
id: 3,
src: "assets/1.jpg",
name: "Camera 3",
server: 2
}, {
id: 4,
src: "assets/1.jpg",
name: "Camera 4",
server: 2
}].map((_ => {
let current = new Date();
let randomClips = d3.randomUniform(24);
let randomTimeSkew = d3.randomUniform(-30, 30);
let randomType = d3.randomUniform(3);
return camera => {
camera.clips = d3.timeHour.every(Math.ceil(24 / randomClips())).range(d3.timeDay.offset(current, -30), d3.timeDay(d3.timeDay.offset(current, 1))).map((d, indexEndTime, g) => {
return {
startTime: indexEndTime === 0 ? d : d3.timeMinute.offset(d, randomTimeSkew()),
endTime: indexEndTime === g.length - 1 ? d3.timeDay(d3.timeDay.offset(current, 1)) : null,
type: Math.floor(randomType())
};
}).map((d, indexStartTime, g) => {
if(d.endTime === null)
d.endTime = g[indexStartTime + 1].startTime;
return d;
});
return camera;
};
})());
let myChart = chart();
let selection = d3.select('div#container');
selection.datum(data).call(myChart);
<div id="container"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.13.0/d3.min.js"></script>
Edit: The zoom handler below works fine, but I'd like a more general solution:
let newScaleTime = d3.event.transform.rescaleX(scaleTime);
d3.select('g#axis').call(axis.scale(newScaleTime));
d3.selectAll('rect.clip').attr('width', d => {
return newScaleTime(d.endTime) - newScaleTime(d.startTime);
}).attr('x', d => {
return newScaleTime(d.startTime);
});
The short answer is you need to implement a reference scale to indicate what the scale's base state is when unmanipulated by the zoom. Otherwise you will run into the problem you describe: "It almost works but the rescaling seems to accumulate zoom transforms getting faster and faster. "
To see why a reference scale is needed, zoom in on the graph and out (once each) without moving the mouse. When you zoom in, the axis changes. When you zoom out, the axis does not. Note the scale factor on the initial zoom in and the first time you zoom out: 1.6471820345351462 on the zoom in, 1 on the zoom out. The number represents how much to magnify/minify whatever it is we are zooming in on. On the initial zoom in we magnify by a factor of ~1.65. On the subsequent zoom out we minify by a factor of 1, i.e. not at all. If, on the other hand, you zoom out first, you minify by a factor of about 0.6, and then if you were to zoom in you magnify by a factor of 1. I've built a stripped-down version of your example to show this:
function chart() {
let zoom = d3.zoom().scaleExtent([0.25,20]);
let scale = d3.scaleLinear().domain([0,1000]).range([0,550]);
let axis = d3.axisBottom;
let result = function(selection) {
selection.each(function() {
let selection = d3.select(this);
selection.call(axis(scale));
selection.call(zoom);
zoom.on('zoom', function() {
scale = d3.event.transform.rescaleX(scale);
console.log(d3.event.transform.k);
selection.call(result);
});
})
}
return result;
}
d3.select("svg").call(chart());
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script>
<svg width="550" height="200"></svg>
The scale should be relative to the initial zoom factor, usually 1. In other words, the zoom is cumulative: it records magnification/minification as a factor of the initial scale, not of the last step (otherwise transform k values could only take one of three values: one for zooming out, one for zooming in, and one for remaining the same, all relative to the current scale). This is why rescaling the initial scale doesn't work - you lose the reference point to the initial scale that the zoom is referencing.
From the docs: if we redefine a scale with d3.event.transform.rescaleX, we get a scale that reflects the zoom's (cumulative) transformation:
[the rescaleX] method does not modify the input scale x; x thus
represents the untransformed scale, while the returned scale
represents its transformed view. (docs)
Building on this, if we zoom in twice in a row, the transform.k value is ~1.6x the first time and ~2.7x the second. But since we rescale the scale, we apply a zoom of 2.7x to a scale that has already been zoomed in 1.6x, giving us a scale factor of ~4.5x rather than 2.7x. To make matters worse, if we zoom in twice and then out once, the zoom (out) event gives us a scale value that is still greater than 1 (~1.6 on the first zoom in, ~2.7 on the second, ~1.6 on the zoom out), hence we are still zooming in despite scrolling out:
function chart() {
let zoom = d3.zoom().scaleExtent([0.25,20]);
let scale = d3.scaleLinear().domain([0,1000]).range([0,550]);
let axis = d3.axisBottom;
let result = function(selection) {
selection.each(function() {
let selection = d3.select(this);
selection.call(axis(scale));
selection.call(zoom);
zoom.on('zoom', function() {
scale = d3.event.transform.rescaleX(scale);
var magnification = 1000/(scale.domain()[1] - scale.domain()[0]);
console.log("Actual magnification: "+magnification+"x");
console.log("Intended magnification: "+d3.event.transform.k+"x")
console.log("---");
selection.call(result);
});
})
}
return result;
}
d3.select("svg").call(chart());
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script>
<svg width="550" height="200"></svg>
I haven't discussed the x offset portion of the zoom, but you can imagine that a similar problem occurs - the zoom is cumulative but you lose the initial reference point that those cumulative changes are in reference to.
The idiomatic solution is to use a reference scale and the zoom to create a working scale used for plotting rectangles/axes/etc. The working scale is initially the same as the reference scale (generally) and is set as so: workingScale = d3.event.transform.rescaleX(referenceScale) on each zoom.
function chart() {
let zoom = d3.zoom().scaleExtent([0.25,20]);
let workingScale = d3.scaleLinear().domain([0,1000]).range([0,550]);
let referenceScale = d3.scaleLinear().domain([0,1000]).range([0,550]);
let axis = d3.axisBottom;
let result = function(selection) {
selection.each(function() {
let selection = d3.select(this);
selection.call(axis(workingScale));
selection.call(zoom);
zoom.on('zoom', function() {
workingScale = d3.event.transform.rescaleX(referenceScale);
var magnification = 1000/(workingScale.domain()[1] - workingScale.domain()[0]);
console.log("Actual magnification: "+magnification+"x");
console.log("Intended magnification: "+d3.event.transform.k+"x")
console.log("---");
selection.call(result);
});
})
}
return result;
}
d3.select("svg").call(chart());
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.0/d3.min.js"></script>
<svg width="550" height="200"></svg>
I'm trying to do a path tween like this one: https://bl.ocks.org/mbostock/3916621
Problem: The path is changing sides - the gray path changes from left to right and the white path from bottom to top. This isn't the expected transition!
Edit: The expected transition should be a simple grow transition, so the small path should grow into the bigger one.
Example: https://jsfiddle.net/wdv3rufs/
const PATHS = {
FULL: {
GRAY: 'M1035,429l-4.6-73.7L1092,223l-66,1l-66.3-36.4l-102.5,67.6L623.8,0L467.4,302.1l-218.7-82.9L77.6,317.4L0,214.5V429H1035z',
WHITE: 'M0,429V292l249.4-72.9l135.4,56.6L623.8,0L824,232.5l135.7-44.9l26.7,190.5l29.3,16.8l19.3,34H0z'
},
SMALL: {
GRAY: 'M0,429h834l-134.2-34.6l-37,23.2l-130.6-35.2l-112.7,33.5l-144.2-67l-96.7,65.2L43.5,377.5L0,391.4V429z',
WHITE: 'M0,429h834l-134.2-34.6l-83.5,29l-84.1-41l-126.9,31.2l-130-64.7l-144.8,58.6l-87-30L0,386.1V429z'
}
};
const pathTween = (d1, precision) => {
return function() {
const path0 = this;
const path1 = path0.cloneNode();
const n0 = path0.getTotalLength();
const n1 = (path1.setAttribute('d', d1), path1).getTotalLength();
const distances = [0];
const dt = precision / Math.max(n0, n1);
let i = 0;
while ((i += dt) < 1) distances.push(i);
distances.push(1);
const points = distances.map(t => {
const p0 = path0.getPointAtLength(t * n0);
const p1 = path1.getPointAtLength(t * n1);
return d3.interpolate([p0.x, p0.y], [p1.x, p1.y]);
});
return t => {
return t < 1 ? 'M' + points.map(p => p(t)).join('L') : d1;
};
};
};
const pathTransition = (path, d1) => {
path.transition()
.duration(10000)
.attrTween('d', pathTween(d1, 4));
}
var svg = d3.select('svg');
svg.append('path')
.attr('class', 'white-mountain')
.attr('d', PATHS.SMALL.WHITE)
.call(pathTransition, PATHS.FULL.WHITE);
svg.append('path')
.attr('class', 'gray-mountain')
.attr('d', PATHS.SMALL.GRAY)
.call(pathTransition, PATHS.FULL.GRAY);
How can I get this working?