I'm creating an array of sounds with PositionalAudio:
var newVoice = new THREE.PositionalAudio(listener);
newVoice.setBuffer(buffer);
newVoice.setRefDistance(20);
newVoice.autoplay = true;
newVoice.setLoop(true);
voices.push(newVoice);
I've attached these voices to cubes, but I only want the user to hear a sound when they are facing its cube head-on, within 30 degrees. Anything outside of a 30-degree cone should be silent.
I see the documentation here, but the only parameter that works is the one I used, setRefDistance. The others do not work. I'm using r74.
Any ideas? The gist is here: https://gist.github.com/evejweinberg/949e297c34177199386f945549a45c06
Three.js Audio is a wrapper for the Web Audio API. You can apply all of these settings to the panner, which is available via getOutput():
var sound = new THREE.PositionalAudio( listener );
var panner = sound.getOutput();
panner.coneInnerAngle = innerAngleInDegrees;
panner.coneOuterAngle = outerAngleInDegrees;
panner.coneOuterGain = outerGainFactor;
coneInnerAngle: A parameter for directional audio sources, this is an angle, inside of which there will be no volume reduction. The default value is 360.
coneOuterAngle: A parameter for directional audio sources, this is an angle, outside of which the volume will be reduced to a constant value of coneOuterGain. The default value is 360.
coneOuterGain: A parameter for directional audio sources, this is the amount of volume reduction outside of the coneOuterAngle. The default value is 0.
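Putting this together with the question's setup, a 30-degree cone might look like the sketch below (cube stands for whichever mesh the voice is attached to):
var newVoice = new THREE.PositionalAudio( listener );
newVoice.setBuffer( buffer );
newVoice.setRefDistance( 20 );
newVoice.setLoop( true );
newVoice.autoplay = true;
var panner = newVoice.getOutput(); // the underlying Web Audio PannerNode
panner.coneInnerAngle = 30;        // full volume inside 30 degrees
panner.coneOuterAngle = 30;        // no transition zone
panner.coneOuterGain = 0;          // silent outside the cone
cube.add( newVoice );              // the cone points along the cube's orientation
Note that the panner's cone is defined relative to the audio source's orientation (whether the listener is positioned inside the cone), not the listener's view direction; the related question below covers the view-direction case.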
Sources:
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
http://www.html5rocks.com/en/tutorials/webaudio/positional_audio/
Related
I'm trying to modulate the volume of some sound based on the view direction between the camera and the sound. So if you are fully looking at the sound source the volume is 100%, if you turn away it is turned down.
Setting the built-in directionalCone, which maps to the Web Audio PannerNode, is not what I want. That defines whether audio is audible while the player is positioned inside the cone; I'd like it to work based on the view direction.
I have something working in A-Frame by taking the dot product between the camera's view direction and the direction between the player and the audio clip. However, this is (for some reason) quite expensive, and I'm wondering if there is some built-in functionality that I am overlooking.
tick: function() {
  if (!this.sound.isPlaying) return; // todo: this is true even outside the spatial distance!

  var camFwd = this.camFwd;
  this.camera.object3D.getWorldPosition(camFwd);

  var dir = this.dir;
  this.el.object3D.getWorldPosition(dir);

  dir.subVectors(
    camFwd, // camera pos
    dir     // element pos
  ).normalize();

  this.camera.object3D.getWorldDirection(camFwd);
  var dot = THREE.Math.clamp(camFwd.dot(dir), 0, 1);
  // float dot = Mathf.dot(transform.forward, (camTrans.position-transform.position).normalized);

  this.setVolume(THREE.Math.lerp(
    this.data.minVolume,
    this.data.maxVolume,
    dot));
},
This gives the intended effect, but it shows up in the performance profiler as quite expensive. Especially getWorldDirection is costly for some reason, even though the hierarchy itself is simple.
Especially getWorldDirection is costly for some reason
Object3D.getWorldPosition() and Object3D.getWorldDirection() always force a recomputation of the object's world matrix. Depending on when tick is executed, it can be sufficient to do this instead:
camFwd.setFromMatrixPosition( this.camera.object3D.matrixWorld );
dir.setFromMatrixPosition( this.el.object3D.matrixWorld );
The code just extracts the position from the world matrix without updating it. You can use a similar approach for the direction vector, although the code is a bit more complex:
var e = this.camera.object3D.matrixWorld.elements;
camFwd.set( e[ 8 ], e[ 9 ], e[ 10 ] ).normalize();
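Putting both together, the tick handler from the question could look roughly like this (a sketch; depending on which object3D the matrix comes from, the extracted column may need to be negated to match what getWorldDirection() returned):
tick: function() {
  if (!this.sound.isPlaying) return;

  var camFwd = this.camFwd;
  var dir = this.dir;

  // positions straight from the cached world matrices (no recomputation)
  camFwd.setFromMatrixPosition( this.camera.object3D.matrixWorld ); // camera position
  dir.setFromMatrixPosition( this.el.object3D.matrixWorld );        // element position
  dir.subVectors( camFwd, dir ).normalize();                        // element -> camera

  // view direction: third column of the camera's world matrix
  var e = this.camera.object3D.matrixWorld.elements;
  camFwd.set( e[ 8 ], e[ 9 ], e[ 10 ] ).normalize();

  var dot = THREE.Math.clamp( camFwd.dot( dir ), 0, 1 );
  this.setVolume( THREE.Math.lerp( this.data.minVolume, this.data.maxVolume, dot ) );
},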
How can I split a stereo audio file (I'm currently working with a WAV, but I'm interested in how to do it for MP3 as well, if that's different) into left and right channels to feed into two separate Fast Fourier Transforms (FFTs) from the p5.sound.js library?
I've written out what I think I need to be doing in the code below, but I haven't been able to find examples of anyone doing this through Google searches, and all my layman's attempts have turned up nothing.
I'll share what I have below, but in all honesty, it's not much. Everything in question would go in the setup function where I've made a note:
// variable for the p5 sound object
var sound = null;
var playing = false;

function preload(){
  sound = loadSound('assets/leftRight.wav');
}

function setup(){
  createCanvas(windowWidth, windowHeight);
  background(0);
  // I need to do something here to split the audio and return an AudioNode for just
  // the left stereo channel. I have a feeling it's something like
  // feeding audio.getBlob() to a FileReader() and some manipulation and then converting
  // the result of FileReader() to a Web Audio API source node and feeding that into
  // fft.setInput() like justTheLeftChannel is below, but I'm not understanding how to work
  // with javascript audio methods and createChannelSplitter() and the attempts I've made
  // have just turned up nothing.
  fft = new p5.FFT();
  fft.setInput(justTheLeftChannel);
}
function draw(){
  sound.pan(-1);
  background(0);
  push();
  noFill();
  stroke(255, 0, 0);
  strokeWeight(2);
  beginShape();
  // calculate the waveform from the fft.
  var wave = fft.waveform();
  for (var i = 0; i < wave.length; i++){
    // for each element of the waveform map it to screen
    // coordinates and make a new vertex at the point.
    var x = map(i, 0, wave.length, 0, width);
    var y = map(wave[i], -1, 1, 0, height);
    vertex(x, y);
  }
  endShape();
  pop();
}

function mouseClicked(){
  if (!playing){
    sound.loop();
    playing = true;
  } else {
    sound.stop();
    playing = false;
  }
}
Solution:
I'm not a p5.js expert, but I've worked with it enough that I figured there has to be a way to do this without the whole runaround of blobs / file reading. The docs aren't very helpful for complicated processing, so I dug around a little in the p5.Sound source code and this is what I came up with:
// left channel
sound.setBuffer([sound.buffer.getChannelData(0)]);
// right channel
sound.setBuffer([sound.buffer.getChannelData(1)]);
Here's a working example - clicking the canvas toggles between L/stereo/R audio playback and FFT visuals.
Explanation:
p5.SoundFile has a setBuffer method, which can be used to modify the audio content of the sound file object in place. The function signature specifies that it accepts an array of buffer objects, and if that array only has one item, it'll produce a mono source, which is already in the correct format to feed to the FFT! So how do we produce a buffer containing only one channel's data?
Throughout the source code there are examples of individual channel manipulation via sound.buffer.getChannelData(). I was wary of accessing undocumented properties at first, but it turns out that since p5.Sound uses the Web Audio API under the hood, this buffer is really just a plain old Web Audio AudioBuffer, and the getChannelData method is well-documented.
The only downside of the approach above is that setBuffer acts directly on the SoundFile, so you have to load the file again for each channel you want to separate, but I'm sure there's a workaround for that.
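For reference, a minimal sketch of that workaround, simply loading the file once per channel (leftSound, rightSound, leftFFT and rightFFT are illustrative names, not from the question):
var leftSound, rightSound, leftFFT, rightFFT;

function preload(){
  // one copy of the file per channel we want to isolate
  leftSound = loadSound('assets/leftRight.wav');
  rightSound = loadSound('assets/leftRight.wav');
}

function setup(){
  createCanvas(windowWidth, windowHeight);
  // reduce each copy to a single (mono) channel in place
  leftSound.setBuffer([leftSound.buffer.getChannelData(0)]);   // left channel
  rightSound.setBuffer([rightSound.buffer.getChannelData(1)]); // right channel
  leftFFT = new p5.FFT();
  leftFFT.setInput(leftSound);
  rightFFT = new p5.FFT();
  rightFFT.setInput(rightSound);
}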
Happy splitting!
I would like to build a parallax effect from a 2D image using a depth map, similar to this or this, but using three.js.
The question is, where should I start? Using just a PlaneGeometry with a MeshStandardMaterial renders my 2D image without parallax occlusion. Once I add my depth map as the displacementMap property I can see some sort of displacement, but it is very low-res. (Maybe because displacement maps are not meant to be used for this?)
My first attempt
import * as THREE from "three";
import image from "./Resources/Images/image.jpg";
import depth from "./Resources/Images/depth.jpg";
[...]
const geometry = new THREE.PlaneGeometry(200, 200, 10, 10);
const material = new THREE.MeshStandardMaterial();
const spriteMap = new THREE.TextureLoader().load(image);
const depthMap = new THREE.TextureLoader().load(depth);
material.map = spriteMap;
material.displacementMap = depthMap;
material.displacementScale = 20;
const plane = new THREE.Mesh(geometry, material);
Or should I use a Sprite object, whose face always points to the camera? But how would I apply the depth map to it then?
I've set up a codesandbox with what I've got so far. It also contains an event listener for mouse movement and rotates the camera on movement, as it is a work in progress.
Update 1
So I've figured out that I seem to need a custom ShaderMaterial for this. After looking at PixiJS's implementation, I found out that it is based on a custom shader.
Since I have access to the source, all I need to do is rewrite it to be compatible with three.js. But the big question is: HOW?
It would be awesome if someone could point me in the right direction, thanks!
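For what it's worth, a bare-bones version of such a ShaderMaterial could look like the sketch below; it just offsets the image UVs by a mouse-driven vector scaled by the depth texture (uniform names like uImage, uDepth and uOffset are illustrative, not taken from the PixiJS source):
const parallaxMaterial = new THREE.ShaderMaterial({
  uniforms: {
    uImage: { value: spriteMap },               // the 2D image
    uDepth: { value: depthMap },                // the depth map
    uOffset: { value: new THREE.Vector2(0, 0) } // update from the mousemove listener
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D uImage;
    uniform sampler2D uDepth;
    uniform vec2 uOffset;
    varying vec2 vUv;
    void main() {
      float depth = texture2D(uDepth, vUv).r; // brighter = closer
      vec2 shifted = vUv + uOffset * depth;   // shift near pixels more than far ones
      gl_FragColor = texture2D(uImage, shifted);
    }
  `
});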
I create three different linear chirps using the code found here on SO. With some other code snippets I save those three sounds as separate .wav files. This works so far.
Now I want to play those three sounds at the exact same time. So I thought I could use the Web Audio API and feed three oscillator nodes with the float arrays I got from the code above.
But I can't get even one oscillator node to play its sound.
My code so far (shrunk to one oscillator):
var osc = audioCtx.createOscillator();
var sineData = linearChirp(freq, (freq + signalLength), signalLength, audioCtx.sampleRate); // linearChirp from link above
// sine values; add 0 at the front because the docs states that the first value is ignored
var imag = Float32Array.from(sineData.unshift(0));
var real = new Float32Array(imag.length); // cos values
var customWave = audioCtx.createPeriodicWave(real, imag);
osc.setPeriodicWave(customWave);
osc.start();
At the moment I suppose that I do not quite understand the math behind the periodic wave.
The code that plays the three sounds at the same time works (with simple sine values in the oscillator nodes), so I assume that the problem is my periodic wave.
Another question: is there a different way? Maybe using three MediaElementAudioSourceNodes linked to my three .wav files? I don't see a way to play them at the exact same time.
The PeriodicWave isn't a "stick a waveform in here and it will be used as a single oscillation" feature; it builds a waveform by specifying the relative strengths of various harmonics. Note that in the code you pointed to, they create a BufferSource node and point its .buffer to the results of linearChirp(). You can do that, too: just use BufferSource nodes to play back the linearChirp() outputs, which (I think?) are just sine waves anyway? (If so, you could just use an oscillator and skip that whole messy "create a buffer" bit.)
If you just want to play back the buffers you've created, use BufferSource. If you want to create complex harmonics, use PeriodicWave. If you've created a single-cycle waveform and you want to play it back as a source waveform, use BufferSource and loop it.
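A minimal sketch of that BufferSource approach, assuming chirp1, chirp2 and chirp3 are the Float32Arrays returned by linearChirp(), and scheduling all three against the same AudioContext clock so they start together:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function makeSource(samples) {
  var buffer = audioCtx.createBuffer(1, samples.length, audioCtx.sampleRate);
  buffer.getChannelData(0).set(samples); // copy the chirp samples into the buffer
  var src = audioCtx.createBufferSource();
  src.buffer = buffer;
  src.connect(audioCtx.destination);
  return src;
}

var sources = [chirp1, chirp2, chirp3].map(makeSource);
var startAt = audioCtx.currentTime + 0.1;                // schedule slightly in the future
sources.forEach(function (src) { src.start(startAt); }); // all three start at the same time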
In an HTML5 game I'm making, I play a "thud" sound when things collide. However, it is a bit unrealistic. No matter the velocity of the objects, they will always make the same, relatively loud "thud" sound. What I'd like to do is to have that sound's loudness depend on velocity, but how do I do that? I only know how to play a sound.
playSound = function(id) {
  sounds[id].play();
};
sounds is an array full of new Audio("url")'s.
Use the audio element's volume property. From W3:
The element's effective media volume is volume, interpreted relative to the range 0.0 to 1.0, with 0.0 being silent, and 1.0 being the loudest setting, values in between increasing in loudness. The range need not be linear. The loudest setting may be lower than the system's loudest possible setting; for example the user could have set a maximum volume.
Example: sounds[id].volume = 0.5;
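Applied to the question, a sketch of mapping collision velocity to volume could look like this (maxImpactSpeed is an illustrative constant to tune for your game):
playSound = function(id, speed) {
  var maxImpactSpeed = 20;                          // speed at which the thud is loudest
  var volume = Math.min(speed / maxImpactSpeed, 1); // clamp to the 0..1 range
  sounds[id].volume = volume;
  sounds[id].play();
};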
You can even play around with gain and make the sound louder than 100%. You can use this function to amplify the sound:
function amplifyMedia(mediaElem, multiplier) {
  var context = new (window.AudioContext || window.webkitAudioContext)(),
      result = {
        context: context,
        source: context.createMediaElementSource(mediaElem),
        gain: context.createGain(),
        media: mediaElem,
        amplify: function(multiplier) { result.gain.gain.value = multiplier; },
        getAmpLevel: function() { return result.gain.gain.value; }
      };
  result.source.connect(result.gain);
  result.gain.connect(context.destination);
  result.amplify(multiplier);
  return result;
}
You could do something like this to set the initial amplification to 100%:
var amp = amplifyMedia(sounds[id], 1);
Then if you need the sound to be twice as loud you could do something like this:
amp.amplify(2);
If you want to halve it, you can do this:
amp.amplify(0.5);
A full write-up of the function can be found here:
http://cwestblog.com/2017/08/17/html5-getting-more-volume-from-the-web-audio-api/
You can adjust the volume by setting the audio element's volume property:
setVolume = function(id, vol) {
  sounds[id].volume = vol; // vol between 0 and 1
};
However, bear in mind that there is a small delay between the volume being set, and it taking effect. You may hear the sound start to play at the previous volume, then jump to the new one.