How can I find out if a WebAudio oscillator is silent? - javascript

Is it possible to find out when a WebAudio oscillator is silent, and then call its stop method?
My reason for asking is that if you don't call stop on an oscillator, it hangs around in memory indefinitely. But oscillators don't have a length or duration, so there's no way to find out when the sound they produce has finished so that you can call stop. So I wonder: is there a way to test whether the oscillator is producing any audible sound, or is silent?

You can put an analyser between the oscillator and its output:
var size = 2048;
var analyser = audioCtx.createAnalyser();
var data = new Float32Array(size);
analyser.fftSize = size;
theOscillator.connect(analyser);
analyser.connect(theOutput);
// Poll roughly once per analyser buffer and stop once every sample is zero.
var silenceChecker = setInterval(function() {
  analyser.getFloatTimeDomainData(data);
  for (var i = 0; i < size; ++i) {
    if (data[i] !== 0) return; // still producing signal
  }
  // It is silent.
  clearInterval(silenceChecker);
  theOscillator.stop();
  theOscillator.disconnect();
  analyser.disconnect();
}, Math.floor(size / audioCtx.sampleRate * 1000)); // one buffer's duration in ms
Note that this is a naive algorithm that can only detect pure digital silence, not whether the oscillator has become so quiet that it is effectively silent. For that you would need a significantly more complex algorithm, and probably not one running in the main thread.
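As a rough illustration of that idea, here is a hedged sketch that reuses the setup above but treats the signal as silent once its RMS level drops below a threshold. The threshold and the check interval are assumptions you would have to tune, not values from the answer:
var THRESHOLD = 1e-4; // roughly -80 dBFS; an assumed, tunable value
var rmsChecker = setInterval(function() {
  analyser.getFloatTimeDomainData(data);
  var sumOfSquares = 0;
  for (var i = 0; i < size; ++i) {
    sumOfSquares += data[i] * data[i];
  }
  var rms = Math.sqrt(sumOfSquares / size);
  if (rms < THRESHOLD) {
    // Effectively silent.
    clearInterval(rmsChecker);
    theOscillator.stop();
    theOscillator.disconnect();
    analyser.disconnect();
  }
}, Math.floor(size / audioCtx.sampleRate * 1000));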

As you already said, oscillators don't have a length or duration, so you (or something else in your code) have to tell the oscillator to stop.
You can set a timeout and stop the oscillator after x seconds, or bind the stop action to a button in the interface or to a key press/release.
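For example, the audio clock itself can schedule the stop, which is usually more precise than a JavaScript timer. A minimal sketch (the 3-second duration is just an assumed example):
var ctx = new AudioContext();
var osc = ctx.createOscillator();
osc.connect(ctx.destination);
osc.start();
// Schedule the stop on the audio clock, 3 seconds from now.
osc.stop(ctx.currentTime + 3);
// Or tie it to the UI instead (assumes a button with id="stop" exists):
// document.getElementById('stop').onclick = function () { osc.stop(); };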

Related

How to attach sound effects to an AudioBuffer

I'm trying to add the following sound effects to some audio files, then grab their audio buffers and convert them to .mp3 format:
Fade-out the first track
Fade in the following tracks
A background track (and make it background by giving it a small gain node)
Another track that will serve as the more audible of both merged tracks
Fade out the previous one and fade in the first track again
I observed that the effects returned by the AudioParam class, as well as those from the GainNode interface, are attached to the context's destination instead of to the buffer itself. Is there a technique to bind the AudioParam instance values (or the gain property) to the buffers, so that when I merge them into one final buffer they still retain those effects? Or do those effects only have meaning at the destination (meaning I must connect the sourceNode) and have to be output via OfflineContexts/startRendering? I tried that method previously and was told on my immediately preceding thread that I only needed one BaseAudioContext and it didn't have to be an OfflineContext. I think that to have different effects on different files I need several contexts, so I'm stuck in a dilemma: I have various AudioParams and GainNodes, but invoking them implicitly by calling start will inadvertently lose their effect.
The following snippets demonstrate the effects I'm referring to, while the full code can be found at https://jsfiddle.net/ka830Lqq/3/
var beginNodeGain = overallContext.createGain(); // Create a gain node
beginNodeGain.gain.setValueAtTime(1.0, buffer.duration - 3); // make the volume high 3 secs before the end
beginNodeGain.gain.exponentialRampToValueAtTime(0.01, buffer.duration); // reduce the volume to the minimum for the duration of expRampTime - setValTime i.e 3
// connect the AudioBufferSourceNode to the gainNode and the gainNode to the destination
begin.connect(beginNodeGain);
Another snippet goes as follows:
function handleBg (bgBuff) {
  var bgContext = new OfflineAudioContext(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate), // using a new context here so we can utilize its individual gains
      bgAudBuff = bgContext.createBuffer(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate),
      bgGainNode = bgContext.createGain(),
      smoothTrans = new Float32Array(3);
  smoothTrans[0] = overallContext.currentTime; // should be 0, to usher it in but is actually between 5 and 6
  smoothTrans[1] = 1.0;
  smoothTrans[2] = 0.4; // make it flow in the background
  bgGainNode.gain.setValueAtTime(0, 0); // currentTime here is 6.something-something
  bgGainNode.gain.setValueCurveAtTime(smoothTrans, 0, finalBuff.pop().duration); // start at `param 2` and last for `param 3` seconds. bgBuff.duration
  for (var channel = 0; channel < bgBuff.numberOfChannels; channel++) {
    for (var j = 0; j < finalBuff[0].length; j++) {
      var data = bgBuff.getChannelData(channel),
          loopBuff = bgAudBuff.getChannelData(channel),
          oldBuff = data[j] != void(0) ? data[j] : data[j - data.length];
      loopBuff[j] = oldBuff;
    }
  }
  // instead of piping them to the output speakers, merge them
  mixArr.push(bgAudBuff);
  gottenBgBuff.then(handleBody());
}
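For what it's worth, one way to "bake" a gain automation into a buffer is to render the source through the gain node inside an OfflineAudioContext and keep the buffer that startRendering() resolves with. This is only a minimal sketch of that idea, with illustrative names that are not from the question's code:
function applyFadeOut(buffer) {
  // Render offline at the buffer's own length and sample rate.
  var offline = new OfflineAudioContext(buffer.numberOfChannels, buffer.length, buffer.sampleRate);
  var source = offline.createBufferSource();
  var fade = offline.createGain();
  source.buffer = buffer;
  fade.gain.setValueAtTime(1.0, buffer.duration - 3); // full volume until 3 s before the end
  fade.gain.exponentialRampToValueAtTime(0.01, buffer.duration); // fade out over the last 3 s
  source.connect(fade);
  fade.connect(offline.destination);
  source.start();
  // Resolves with a new AudioBuffer that has the fade baked in,
  // ready to be mixed or encoded.
  return offline.startRendering();
}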

javascript web audio analyser : getByteFrequencyData at a precise time?

Based on the web audio analyser API, I am creating an audio animation that draws images based on the real-time frequency spectrum (like the classic bar graphs that move to the frequency of the sound, except that it is not bars that are drawn but something more complex).
It works fine, my only issue is that I am not able to stop the image at a precise time.
When I want it to stop at, let's say, 5 seconds, it sometimes stops at 5.000021, or 5.000013, or 5.0000098, ...
and the problem is that the frequency spectrum (and so my image based on it) is not the same at 5.000021, 5.000013, or 5.0000098, ...
This means that when the user wants to see the image corresponding to 5 s, he sees a slightly different image every time, and I would like there to be only one image corresponding to 5 s (often the image is only slightly different on each try, but sometimes the differences are quite large).
Here are extracts of my code:
var ctx = new AudioContext();
var soundmp3 = document.getElementById('soundmp3');
soundmp3.src = URL.createObjectURL(this.files[0]);
var audioSrc = ctx.createMediaElementSource(soundmp3);
var analyser = ctx.createAnalyser();
analyser.fftSize = 2048;
analyser.smoothingTimeConstant = 0.95;
audioSrc.connect(analyser);
audioSrc.connect(ctx.destination);
var frequencyData = new Uint8Array(analyser.frequencyBinCount);
function renderFrame() {
  if (framespaused) return;
  drawoneframe();
  requestAnimationFrame(renderFrame);
};
function drawoneframe() {
  analyser.getByteFrequencyData(frequencyData);
  // drawing of my image ...
};
function gotomp3(timevalue) {
  soundmp3.pause();
  newtime = timevalue;
  backtime = newtime - 0.2000;
  soundmp3.currentTime = backtime;
  soundmp3.play();
  function boucle() {
    if (soundmp3.currentTime >= timevalue) {
      if (Math.abs(soundmp3.currentTime - newtime) <= 0.0001) {
        drawoneframe();
        soundmp3.pause();
        soundmp3.currentTime = timeatgetfrequency;
        return;
      } else {
        soundmp3.pause();
        soundmp3.currentTime = backtime;
        soundmp3.play();
      }
    }
    setTimeout(boucle, 1);
  }
  boucle();
}
document.getElementById("fieldtime").onkeydown = function (event) {
  if (event.which == 13 || event.keyCode == 13) {
    gotomp3(document.getElementById("fieldtime").value);
    return false;
  }
  return true;
};
Code explanation: if the user enters a value in the "fieldtime" field (= newtime) and hits Enter, I first go 0.2 s back, start playing, and stop when currentTime is very close to newtime. (I have to go back first because if I jump directly to newtime and stop immediately, analyser.getByteFrequencyData does not yet have the values at newtime.) With boucle() I manage to stop at very precise times: if newtime = 5, drawoneframe() is called at 5.000xx. But the problem is that every time the user enters 5 as newtime, the image shown is slightly different.
So my question: does someone have an idea how I could achieve that, every time the user enters the same newtime, the image is exactly the same?
I am not quite sure when soundmp3.currentTime is updated. With a sample rate of 44.1 kHz, I guess it is something like every 0.0227 ms, but does this mean it is updated exactly at 0, 0.0227 ms, 0.0454 ms, ... or just approximately?
I thought about smoothing the analyser results, so that there are fewer variations for small time variations. Setting analyser.smoothingTimeConstant to a value close to 1 is not sufficient, but maybe there is another way to do it.
I do not need high-precision results; I just want that, each time a user asks for the image corresponding to x seconds, he sees exactly the same image.
Thank you very much for any help.
Mireille
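One avenue that might be worth exploring (a hedged sketch, untested against this exact setup): render the decoded audio through an OfflineAudioContext and suspend the offline clock at the requested time before reading the analyser. The offline clock advances in fixed render quanta, so the same requested time should always produce the same bins. This assumes the track is already available as a decoded AudioBuffer:
function snapshotAt(buffer, time) {
  var offline = new OfflineAudioContext(buffer.numberOfChannels, buffer.length, buffer.sampleRate);
  var source = offline.createBufferSource();
  var analyser = offline.createAnalyser();
  analyser.fftSize = 2048;
  source.buffer = buffer;
  source.connect(analyser);
  analyser.connect(offline.destination);
  source.start();
  var bins = new Uint8Array(analyser.frequencyBinCount);
  // suspend() is quantized to the 128-sample render quantum, which is
  // what makes the snapshot reproducible for a given input time.
  var captured = offline.suspend(time).then(function () {
    analyser.getByteFrequencyData(bins);
    return offline.resume();
  });
  offline.startRendering();
  return captured.then(function () { return bins; });
}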

Web Audio: Change gain value before oscillator starts

I just noticed that it does not seem possible to change the gain.value of a GainNode with setValueAtTime() or setValueCurveAtTime() when no oscillator is connected or when the oscillator has not started yet.
setValueAtTime after the oscillator starts
For instance in this case, setValueAtTime will work as expected:
var context = new AudioContext();
var gain = context.createGain();
gain.connect(context.destination);
var osc = context.createOscillator();
osc.frequency.value = 300;
osc.connect(gain);
osc.start();
gain.gain.setValueAtTime(0, context.currentTime + 1);
The oscillator starts and the gain is 1 for 1 second. Then gain.gain.value will move to 0.
setValueAtTime before the oscillator starts
However, if we set the gain with setValueAtTime before the oscillator starts:
var context = new AudioContext();
var gain = context.createGain();
gain.connect(context.destination);
var osc = context.createOscillator();
osc.frequency.value = 300;
osc.connect(gain);
osc.start(context.currentTime + 1);
gain.gain.setValueAtTime(0, context.currentTime);
The gain.gain.value will stay at 1.
Set gain.gain.value without setValueAtTime
What is strange is that this behaviour is not seen if we set the gain directly
var context = new AudioContext();
var gain = context.createGain();
gain.connect(context.destination);
var osc = context.createOscillator();
osc.frequency.value = 300;
osc.connect(gain);
osc.start(context.currentTime + 1);
gain.gain.value = 0;
The gain value will always stay at 0.
If you're using Chrome, then this is probably a bug in Chrome. Chrome actually returns the computed value in the getter, but if a node doesn't have an input but is still connected to the destination, the AudioParam automations aren't run. They should be, and the values can be inspected with the .value getter.
AudioParam.value isn't a computed value - i.e., it won't tell you what the gain currently IS, just what AudioParam.value was last set to (cf. https://webaudio.github.io/web-audio-api/#widl-AudioParam-value). If you want to know what the current value of the AudioParam truly is, you'd need to route it through an audio node and collect the data (e.g. via a script processor). In your first example, I don't think gain.gain.value should go to 0.
The actual value of an AudioParam at any given point in time can be affected not only by the scheduler and the .value, but also by nodes connect()ed to the AudioParam; it would be expensive to compute those values constantly and port them back to the AudioParam.
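To make that last point concrete, here is one hedged way to observe what the gain actually does: feed a constant DC signal of 1 through the GainNode and read the output samples, which then equal the effective gain. This is only a sketch; it assumes a browser with ConstantSourceNode support:
var ctx = new AudioContext();
var gain = ctx.createGain();
var probe = ctx.createConstantSource(); // emits a steady stream of 1s
var analyser = ctx.createAnalyser();
var samples = new Float32Array(analyser.fftSize);
probe.connect(gain);
gain.connect(analyser); // not routed to the speakers, so it stays inaudible
probe.start();
gain.gain.setValueAtTime(0.25, ctx.currentTime + 1);
setInterval(function () {
  analyser.getFloatTimeDomainData(samples);
  // Each sample is 1 * (effective gain), i.e. what the automation produced.
  console.log('effective gain ~', samples[0]);
}, 250);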

How can I synthesize a sine audio wave with JS?

I would like to try making a synth using JavaScript, but I can't find any basic examples on how to do so.
What I have figured out from my research is that it appears to be possible, and that you should use a Canvas Pixel Array rather than normal ECMA arrays.
I've also found info in MDN Audio, and I have seen audio elements used for continuous playback by web radio players before, although I couldn't figure out how.
My goal is to make something which allows me to synthesize continuous sin waves and play them using my keyboard without using pre-made samples.
EDIT: One of the comments below pointed me in the right direction. I'm currently working on a solution, but if you would like to post one as well, feel free.
Here is a basic example from which anyone should be able to figure out how to play sine waves with their keyboard:
<script type="text/javascript">
//WARNING: VERY LOUD. TURN DOWN YOUR SPEAKERS BEFORE TESTING
// create web audio api context
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
// create Oscillator node
var oscillator = audioCtx.createOscillator();
oscillator.type = 'sine';
oscillator.frequency.value = 750; // value in hertz
oscillator.connect(audioCtx.destination);
oscillator.start();
// uncomment for fun
// setInterval(changeFreq, 100);
// choose a random interval from a list of consonant ratios
var intervals = [1.0, 0.5, 0.3333, 1.5, 1.3333, 2.0, 2.25];
function changeFreq() {
  var intervalIndex = ~~(Math.random() * intervals.length);
  var noteFreq = oscillator.frequency.value * intervals[intervalIndex];
  // because this is random, make an effort to keep it in a comfortable frequency range
  if (noteFreq > 1600)
    noteFreq *= 0.5;
  else if (noteFreq < 250)
    noteFreq *= 2;
  oscillator.frequency.value = noteFreq;
}
</script>
<body>
  <button onclick="changeFreq()">Change Places!</button>
</body>
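To connect this to the original goal of playing notes from the keyboard, here is a minimal hedged sketch: one oscillator per held key, created on keydown and stopped on keyup. The key-to-frequency mapping is an assumption made up for illustration:
var synthCtx = new (window.AudioContext || window.webkitAudioContext)();
var keyFreqs = { a: 261.63, s: 293.66, d: 329.63, f: 349.23 }; // C4 D4 E4 F4
var playing = {};
document.addEventListener('keydown', function (e) {
  var freq = keyFreqs[e.key];
  if (!freq || playing[e.key]) return; // ignore unmapped keys and key auto-repeat
  var osc = synthCtx.createOscillator();
  osc.type = 'sine';
  osc.frequency.value = freq;
  osc.connect(synthCtx.destination);
  osc.start();
  playing[e.key] = osc;
});
document.addEventListener('keyup', function (e) {
  var osc = playing[e.key];
  if (!osc) return;
  osc.stop(); // stopped oscillators can then be garbage collected
  osc.disconnect();
  delete playing[e.key];
});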

HTML5 Canvas game loop delta time calculations

I'm new to game development. Currently I'm making a game for the js13kgames contest, so the game should be small, and that's why I don't use any of the modern popular frameworks.
While developing my infinite game loop I found several articles and pieces of advice to implement it. Right now it looks like this:
self.gameLoop = function () {
  self.dt = 0;
  var now;
  var lastTime = timestamp();
  var fpsmeter = new FPSMeter({decimals: 0, graph: true, theme: 'dark', left: '5px'});
  function frame () {
    fpsmeter.tickStart();
    now = window.performance.now();
    // first variant - delta is increasing..
    self.dt = self.dt + Math.min(1, (now - lastTime) / 1000);
    // second variant - delta is stable..
    self.dt = (now - lastTime) / 16;
    self.dt = (self.dt > 10) ? 10 : self.dt;
    self.clearRect();
    self.createWeapons();
    self.createTargets();
    self.update('weapons');
    self.render('weapons');
    self.update('targets');
    self.render('targets');
    self.ticks++;
    lastTime = now;
    fpsmeter.tick();
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
};
So the problem is in self.dt. I've eventually found out that the first variant is not suitable for my game because dt increases forever, and the speed of weapons increases with it as well (e.g. this.position.x += (Math.cos(this.angle) * this.speed) * self.dt;).
The second variant looks more suitable, but does it correspond to this kind of loop (http://codeincomplete.com/posts/2013/12/4/javascript_game_foundations_the_game_loop/)?
Here's an implementation of an HTML5 rendering system using a fixed time step with a variable rendering time:
http://jsbin.com/ditad/10/edit?js,output
It's based on this article:
http://gameprogrammingpatterns.com/game-loop.html
Here is the game loop:
//Set the frame rate
var fps = 60,
    //Get the start time
    start = Date.now(),
    //Set the frame duration in milliseconds
    frameDuration = 1000 / fps,
    //Initialize the lag offset
    lag = 0;
//Start the game loop
gameLoop();
function gameLoop() {
  requestAnimationFrame(gameLoop, canvas);
  //Calculate the time that has elapsed since the last frame
  var current = Date.now(),
      elapsed = current - start;
  start = current;
  //Add the elapsed time to the lag counter
  lag += elapsed;
  //Update the frame if the lag counter is greater than or
  //equal to the frame duration
  while (lag >= frameDuration) {
    //Update the logic
    update();
    //Reduce the lag counter by the frame duration
    lag -= frameDuration;
  }
  //Calculate the lag offset and use it to render the sprites
  var lagOffset = lag / frameDuration;
  render(lagOffset);
}
The render function calls a render method on each sprite, passing it a reference to the lagOffset:
function render(lagOffset) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  sprites.forEach(function(sprite) {
    ctx.save();
    //Call the sprite's `render` method and feed it the
    //canvas context and lagOffset
    sprite.render(ctx, lagOffset);
    ctx.restore();
  });
}
Here's the sprite's render method that uses the lag offset to interpolate the sprite's render position on the canvas.
o.render = function(ctx, lagOffset) {
  //Use the `lagOffset` and previous x/y positions to
  //calculate the render positions
  o.renderX = (o.x - o.oldX) * lagOffset + o.oldX;
  o.renderY = (o.y - o.oldY) * lagOffset + o.oldY;
  //Render the sprite
  ctx.strokeStyle = o.strokeStyle;
  ctx.lineWidth = o.lineWidth;
  ctx.fillStyle = o.fillStyle;
  ctx.translate(
    o.renderX + (o.width / 2),
    o.renderY + (o.height / 2)
  );
  ctx.beginPath();
  ctx.rect(-o.width / 2, -o.height / 2, o.width, o.height);
  ctx.stroke();
  ctx.fill();
  //Capture the sprite's current positions to use as
  //the previous position on the next frame
  o.oldX = o.x;
  o.oldY = o.y;
};
The important part is this bit of code that uses the lagOffset and the difference in the sprite's rendered position between frames to figure out its new current canvas position:
o.renderX = (o.x - o.oldX) * lagOffset + o.oldX;
o.renderY = (o.y - o.oldY) * lagOffset + o.oldY;
Notice that the oldX and oldY values are being re-calculated each frame at the end of the method, so that they can be used in the next frame to help figure out the difference.
o.oldX = o.x;
o.oldY = o.y;
I'm actually not sure if this interpolation is completely correct or if this is best way to do it. If anyone out there reading this knows that it's wrong, please let us know :)
The modern version of requestAnimationFrame now sends in a timestamp that you can use to calculate elapsed time. When your desired time interval has elapsed you can do your update, create and render tasks.
Here's example code:
var lastTime;
var requiredElapsed = 1000 / 10; // desired interval is 10fps
requestAnimationFrame(loop);
function loop(now) {
  requestAnimationFrame(loop);
  if (!lastTime) { lastTime = now; }
  var elapsed = now - lastTime;
  if (elapsed > requiredElapsed) {
    // do stuff
    lastTime = now;
  }
}
This isn't really an answer to your question, and without knowing more about the particular game I can't say for sure if it will help you, but do you really need to know dt (and FPS)?
In my limited forays into JS game development I've found that you often don't really need to calculate any kind of dt, as you can usually come up with a sensible default value based on your expected frame rate and make anything time-based (such as weapon reloading) simply work off the number of ticks (i.e. a bow might take 60 ticks to reload, which is about 1 second at ~60 FPS).
I usually use window.setTimeout() rather than window.requestAnimationFrame(), which I've found generally provides a more stable frame rate which will allow you to define a sensible default to use in place of dt. On the down-side the game will be more of a resource hog and less performant on slower machines (or if the user has a lot of other things running), but depending on your use case those may not be real concerns.
Now this is purely anecdotal advice so you should take it with a pinch of salt, but it has served me pretty well in the past. It all depends on whether you mind the game running more slowly on older/less powerful machines, and how efficient your game loop is. If it's something simple that doesn't need to display real times you might be able to do away with dt completely.
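As a minimal illustration of that tick-based approach (the fixed frame rate and the reload duration are assumed values):
var TICKS_PER_SECOND = 60; // assumed frame rate used in place of dt
var reloadTicksLeft = 0;
function update() {
  if (reloadTicksLeft > 0) reloadTicksLeft--;
}
function fireBow() {
  if (reloadTicksLeft === 0) {
    // ...launch the arrow here...
    reloadTicksLeft = TICKS_PER_SECOND; // ~1 second at the assumed 60 ticks/s
  }
}
function loop() {
  update();
  // ...render here...
  setTimeout(loop, 1000 / TICKS_PER_SECOND);
}
loop();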
At some point you will want to think about decoupling your physics from your rendering; otherwise your players could have inconsistent physics. For example, someone with a beefy machine getting 300fps will have very sped-up physics compared to someone chugging along at 30fps. This could manifest as the first player cruising around a Mario-like scrolling game at super speed while the other player crawls at half speed (if you did all your testing at 60fps). A way to fix that is to introduce delta time steps: find the time between each frame and use it as part of your physics calculations. It keeps the gameplay consistent regardless of frame rate. Here is a good article to get you started: http://gafferongames.com/game-physics/fix-your-timestep/
requestAnimationFrame will not fix this inconsistency, but it is still a good thing to use sometimes as it has battery saving advantages. Here is a source for more info http://www.chandlerprall.com/2012/06/requestanimationframe-is-not-your-logics-friend/
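A minimal sketch of that delta-time idea (it assumes a player object with x and speedX, where speedX is expressed in units per second):
var last = performance.now();
function frame(now) {
  var dt = (now - last) / 1000; // seconds since the previous frame
  last = now;
  // Movement is defined per second, so it is frame-rate independent.
  player.x += player.speedX * dt;
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);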
I did not check the logic of the math in your code; however, here is what works for me:
GameBox = function()
{
  this.lastFrameTime = Date.now();
  this.currentFrameTime = Date.now();
  this.timeElapsed = 0;
  this.updateInterval = 2000; //in ms
}
GameBox.prototype.gameLoop = function()
{
  window.requestAnimationFrame(this.gameLoop.bind(this));
  this.lastFrameTime = this.currentFrameTime;
  this.currentFrameTime = Date.now();
  this.timeElapsed += this.currentFrameTime - this.lastFrameTime;
  if (this.timeElapsed >= this.updateInterval)
  {
    this.timeElapsed = 0;
    this.update(); //modify data which is used to render
  }
  this.render();
}
This implementation is independent of the CPU speed (ticks). Hope you can make use of it!
A great approach for your game engine would be to think in objects and entities. You can think of everything in your world as objects and entities. Then you want to make a game object manager that holds a list of all your game objects, and a common communication method in the engine so game objects can fire event triggers. The entities in your game, for example a player, would not need to inherit from anything to get the ability to render to the screen or have collision detection. You would simply give the entity the common methods the game engine looks for, and let the engine handle the entity as it likes. Entities can be created or destroyed at any time in the game, so you should not hard-code any entities into the game loop.
You will want other objects in your game engine to respond to event triggers that the engine has received. This can be done with methods on the entity: the engine checks whether the method is available and, if it is, passes the event to the entity. Do not hard-code any of your game logic into the engine; that hurts portability and limits your ability to expand the game later on.
The problem with your code is, first, that you're calling the different objects' render and update methods in the wrong order. You need to call ALL your updates, then ALL your renders, in that order. Another problem is that hard-coding objects into the loop will cause trouble when you want one of the objects to no longer be in the game, or when you want to add more objects later on.
Your game objects will have an update() and a render(); your game engine will look for those functions on the object/entity and call them every frame. You can get fancy and make the engine check whether the game object/entity has a function before calling it. For example, maybe you want an object that has an update() but never renders anything to the screen; you can make the game object functions optional by making the engine check before calling them. It's also good practice to have an init() function for all game objects: when the game engine starts up the scene and creates the objects, it calls each object's init() once on creation and then calls update() every frame. That way you have one function that runs a single time on creation and another that runs every frame, as sketched after the code below.
Delta time is not really needed, as window.requestAnimationFrame(frame); will give you ~60fps. So if you keep track of the frame count, you can tell how much time has passed. Different objects in your game can then (based on a set point in the game and what the frame count was) determine how long they have been doing something from their new frame count.
window.requestAnimationFrame = window.requestAnimationFrame || function(callback){ window.setTimeout(callback, 16); };
gameEngine = function () {
  this.frameCount = 0;
  var self = this; // keep a reference for the callbacks below
  this.update = function() {
    // loop over your objects and run each object's update function
  };
  this.render = function() {
    // loop over your objects and run each object's render function
  };
  this.frame = function() {
    self.update();
    self.render();
    self.frameCount++;
    window.requestAnimationFrame(self.frame);
  };
  this.frame();
};
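To make the optional-method idea above concrete, here is a minimal hedged sketch of an object manager that only calls init()/update()/render() when an entity actually defines them (all names are illustrative):
var gameObjects = [];
function addObject(obj) {
  // init() runs exactly once, when the object is registered.
  if (typeof obj.init === 'function') obj.init();
  gameObjects.push(obj);
}
function updateAll() { // call ALL updates first...
  gameObjects.forEach(function (obj) {
    if (typeof obj.update === 'function') obj.update();
  });
}
function renderAll() { // ...then ALL renders
  gameObjects.forEach(function (obj) {
    if (typeof obj.render === 'function') obj.render();
  });
}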
I have created a full game engine, located at https://github.com/Patrick-W-McMahon/Jinx-Engine/. If you review the code at https://github.com/Patrick-W-McMahon/Jinx-Engine/blob/master/JinxEngine.js you will see a fully functional game engine built 100% in JavaScript. It includes event handlers and permits action calls between objects that are passed into the engine using the event call stack. Check out some of the examples at https://github.com/Patrick-W-McMahon/Jinx-Engine/tree/master/examples to see how it works. The engine can run around 100,000 objects, all being rendered and executed per frame at a rate of 60fps. This was tested on a Core i5; different hardware may vary. Mouse and keyboard events are built into the engine; objects passed into the engine just need to listen for the events the engine passes. Scene management and multi-scene support are currently being built in for more complex games. The engine also supports high-pixel-density screens.
Reviewing my source code should get you on the track for building a more fully functional game engine.
I would also like to point out that you should call requestAnimationFrame() when you're ready to repaint, not before (i.e. at the end of the game loop). One good example of why you should not call requestAnimationFrame() at the beginning of the loop is when you're using a canvas buffer. If you call requestAnimationFrame() at the beginning and then start drawing to the canvas buffer, the browser can end up painting half of the new frame with the other half still being the old frame. This will happen on every frame, depending on how long the buffer takes to finish relative to the repaint cycle (60fps), and each frame overlaps the last, so the buffer gets more corrupted as it loops over itself. That is why you should only call requestAnimationFrame() when the buffer is fully ready to draw to the canvas. By having requestAnimationFrame() at the end, you can skip a repaint when the buffer is not ready, so every repaint is drawn as expected. The position of requestAnimationFrame() in the game loop makes a big difference.
