How to attach sound effects to an AudioBuffer

I'm trying to add the following sound effects to some audio files, then grab their audio buffers and convert them to .mp3 format:
Fade out the first track
Fade in the following tracks
A background track (made "background" by routing it through a gain node with a small gain value)
Another track that will serve as the more audible of the two merged tracks
Fade out the previous one and fade in the first track again
I observed that the effects exposed by the AudioParam class, as well as those from the GainNode interface, are applied at the context's destination rather than attached to the buffer itself. Is there a way to bind the AudioParam values (or the gain property) to the buffers, so that when I merge them into one final buffer they still retain those effects? Or do those effects only have meaning on the way to the destination (meaning I must connect the source nodes and render the output via an OfflineAudioContext and startRendering)? I tried that method previously and was told in my previous question that I only needed one BaseAudioContext and that it didn't have to be an OfflineAudioContext. I think that to apply different effects to different files I need several contexts, so I'm stuck in a dilemma: I have various AudioParams and GainNodes, but simply calling start on the sources loses their effect.
The following snippets demonstrate the effects I'm referring to, while the full code can be found at https://jsfiddle.net/ka830Lqq/3/
var beginNodeGain = overallContext.createGain(); // Create a gain node
beginNodeGain.gain.setValueAtTime(1.0, buffer.duration - 3); // hold full volume until 3 secs before the end
beginNodeGain.gain.exponentialRampToValueAtTime(0.01, buffer.duration); // ramp the volume down to the minimum over those last 3 seconds
// connect the AudioBufferSourceNode to the gainNode and the gainNode to the destination
begin.connect(beginNodeGain);
Another snippet goes thus
function handleBg (bgBuff) {
  var bgContext = new OfflineAudioContext(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate), // using a new context here so we can utilize its individual gains
      bgAudBuff = bgContext.createBuffer(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate),
      bgGainNode = bgContext.createGain(),
      smoothTrans = new Float32Array(3);

  smoothTrans[0] = overallContext.currentTime; // should be 0, to usher it in, but is actually between 5 and 6
  smoothTrans[1] = 1.0;
  smoothTrans[2] = 0.4; // make it flow in the background

  bgGainNode.gain.setValueAtTime(0, 0); // currentTime here is 6.something-something
  bgGainNode.gain.setValueCurveAtTime(smoothTrans, 0, finalBuff.pop().duration); // start at `param 2` and last for `param 3` seconds. bgBuff.duration

  for (var channel = 0; channel < bgBuff.numberOfChannels; channel++) {
    for (var j = 0; j < finalBuff[0].length; j++) {
      var data = bgBuff.getChannelData(channel),
          loopBuff = bgAudBuff.getChannelData(channel),
          oldBuff = data[j] != void(0) ? data[j] : data[j - data.length];
      loopBuff[j] = oldBuff;
    }
  }

  // instead of piping them to the output speakers, merge them
  mixArr.push(bgAudBuff);
  gottenBgBuff.then(handleBody());
}
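For reference, here is a minimal sketch of the OfflineAudioContext route the question alludes to: the gain automation is applied while startRendering runs, so the AudioBuffer the promise resolves with has the fade baked in. The envelope values are taken from the first snippet above; treat this as a sketch, not the asker's actual code.

function renderWithFade(buffer) {
  var offline = new OfflineAudioContext(buffer.numberOfChannels, buffer.length, buffer.sampleRate);
  var source = offline.createBufferSource();
  var fade = offline.createGain();

  source.buffer = buffer;
  // Same envelope as above: full volume until 3s before the end, then fade out
  fade.gain.setValueAtTime(1.0, buffer.duration - 3);
  fade.gain.exponentialRampToValueAtTime(0.01, buffer.duration);

  source.connect(fade);
  fade.connect(offline.destination);
  source.start();

  // Resolves with a new AudioBuffer that has the fade baked into its samples
  return offline.startRendering();
}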

Related

How to "animate" changes in an ASCII art grid, one node at a time, without freezing the browser?

I have an ASCII art "pathfinding visualizer" which I am modeling on a popular one seen here. The ASCII art displays an n by m board with n*m nodes on it.
My current goal is to slowly change the appearance of the text on the user-facing board, character by character, until the "animation" is finished. I intend to animate both the "scanning" of the nodes by the pathfinding algorithm and the shortest path from the start node to the end node. The animation, which is just changing text in a series of divs, should take a few seconds. I also plan to add a CSS animation with color or something.
Basically the user ends up seeing something like this, where * is the start node, x is the end node, and + indicates the path:
....
..*.
..+.
.++.
.x..
....
(. represents an empty space)
After doing some research on setTimeout, promises, and other options, I can tell you:
JavaScript really isn't designed to let you delay code execution in the browser.
I've found lots of ways to freeze the browser. I also tried to set up a series of promises that resolve after setTimeout(resolve, milliseconds), where the milliseconds steadily increase (see the code below). My expectation was that numOfAnimationsForPath promises would be set up, each triggering a change in the appearance of the board when it resolved (forming the appearance of a path). But they all seem to resolve instantly (?), as I see the path appear as soon as I click the "animate" button:
const shortestPathAndScanningOrder = dijkstras(grid);
promisesRendering(1000, shortestPathAndScanningOrder[0].path, shortestPathAndScanningOrder[1])

function promisesRendering(animationDelay, algoPath, scanTargets) {
  const numOfAnimationsForScanning = scanTargets.length;
  const numOfAnimationsForPath = algoPath.length;
  for (let i = 1; i < numOfAnimationsForPath - 1; i++) {
    const xCoordinate = algoPath[i][0];
    const yCoordinate = algoPath[i][1];
    renderAfterDelay(animationDelay * i).then(renderNode(xCoordinate, yCoordinate, "path"))
  }
}

function renderAfterDelay(milliseconds) {
  return new Promise(resolve => setTimeout(resolve, milliseconds))
}

function renderNode(x, y, type) {
  if (type === "scan") {
    const targetDiv = getLocationByCoordinates(x, y);
    targetDiv.innerHTML = VISITED_NODE;
  } else if (type === "path") {
    const targetDiv = getLocationByCoordinates(x, y);
    targetDiv.innerHTML = SHORTEST_PATH_NODE;
  } else {
    throw "passed incorrect parameter to 'type' argument"
  }
}
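(Note on the snippet above: .then(renderNode(xCoordinate, yCoordinate, "path")) invokes renderNode immediately and passes its return value to then, which is why the whole path appears at once. A sketch of the deferred form:

renderAfterDelay(animationDelay * i).then(function () {
  renderNode(xCoordinate, yCoordinate, "path"); // now runs after the timeout
});
)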
In my other attempt, I tried to generate pathLength setTimeouts, as in:
const shortestPathAndScanningOrder = dijkstras(grid);
renderByTimer(10000, shortestPathAndScanningOrder[0].path, shortestPathAndScanningOrder[1])

function renderByTimer(animationDelay, algoPath, scanTargets) {
  const numOfAnimations = algoPath.length;
  for (let i = 1; i < numOfAnimations - 1; i++) {
    const xCoordinate = algoPath[i][0];
    const yCoordinate = algoPath[i][1];
    setTimeout(i * animationDelay, updateCoordinatesWithTrailMarker(xCoordinate, yCoordinate))
  }
}
...but this also resulted in the path being "animated" instantly instead of over a few seconds as I want it to be.
I believe what I want is possible because the Pathfinding Visualizer linked at the start of the post animates its board slowly, but I cannot figure out how to do it with text.
So basically:
If anyone knows how I can convince my browser to send an increasing delay value to a series of function executions, I'm all ears...
And if you think it can't be done, I'd like to know that too in the comments, just so I know I have to choose an alternative to changing the text slowly.
edit: a friend tells me setTimeout should be able to do it... I'll update this w/ a solution if I figure it out
Edit2: Here is the modified version of @torbinsky's code that ended up doing the job for me...
function renderByTimer(algoPath, scanTargets) {
  const numOfAnimations = algoPath.length - 1; // - 1 because we don't wanna animate the TARGET_NODE at the end
  let frameNum = 1;

  // Renders the current frame and schedules the next frame.
  // This repeats until we have exhausted all frames.
  function renderIn() {
    if (frameNum >= numOfAnimations) {
      // end recursion
      console.log("Done!")
      return
    }
    // Immediately render the current frame
    const xCoordinate = algoPath[frameNum][0];
    const yCoordinate = algoPath[frameNum][1];
    frameNum = frameNum + 1;
    updateCoordinatesWithTrailMarker(xCoordinate, yCoordinate);
    // Schedule the next frame for rendering
    setTimeout(function () {
      renderIn()
    }, 1000);
  }

  // Render first frame
  renderIn()
}
Thanks @torbinsky!
This should absolutely be doable using setTimeout. Probably the issue is that you are registering all of your timeouts up front. The longer your path, the worse this approach becomes.
So instead of scheduling all updates right away, you should use a recursive algorithm where each "frame" schedules the timeout for the next frame. Something like this:
const shortestPathAndScanningOrder = dijkstras(grid);
renderByTimer(10000, shortestPathAndScanningOrder[0].path, shortestPathAndScanningOrder[1])
function renderByTimer(animationDelay, algoPath, scanTargets) {
  const numOfAnimations = algoPath.length;

  // Renders the current frame and schedules the next frame.
  // This repeats until we have exhausted all frames.
  function renderIn(msToNextFrame, frameNum) {
    if (frameNum >= numOfAnimations) {
      // end recursion
      return
    }
    // Immediately render the current frame
    const xCoordinate = algoPath[frameNum][0];
    const yCoordinate = algoPath[frameNum][1];
    updateCoordinatesWithTrailMarker(xCoordinate, yCoordinate);
    // Schedule the next frame for rendering (note: the callback comes first,
    // then the delay)
    setTimeout(function () {
      renderIn(msToNextFrame, frameNum + 1)
    }, msToNextFrame);
  }

  // Render first frame
  renderIn(1000, 1)
}
Note: I wrote this code in the StackOverflow code snippet, so I was not able to test it, as I did not have the rest of your code to fully run it. Treat it more like pseudo-code, even though it probably works ;)
In any case, the approach I've used is to have only one timeout scheduled at any given time. This way you don't overload the browser with thousands of timeouts scheduled at the same time. This approach will support very long paths!
This is a general animation technique and not particularly unique to ASCII art except that old-school ASCII art is rendered one (slow) character at a time instead of one fast pixel frame at a time. (I'm old enough to remember watching ASCII "movies" stream across hard-wired Gandalf modems at 9600bps to a z19 terminal from the local mainframe...everything old is new again! :) ).
Anyhow, queueing up a bunch of setTimeouts is not really the best plan IMO. What you should be doing, instead, is queueing up the next event with either window.requestAnimationFrame or setTimeout. I recommend rAF because it doesn't trigger when the browser tab is not showing.
Next, once you're in the event, you look at the clock delta (using a snapshot of performance.now()) to figure out what should have been drawn between "now" and the last time your function ran. Then update the display and trigger the next event.
This will yield a smooth animation that will play nicely with your system resources.
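For illustration, a minimal sketch of that clock-delta approach applied to the grid above, reusing the question's getLocationByCoordinates helper and SHORTEST_PATH_NODE constant (assumed to be available):

function animatePath(algoPath, msPerNode) {
  let nextIndex = 1;                  // skip the start node, like the loops above
  let lastDraw = performance.now();

  function tick(now) {
    // Draw every node whose time has arrived since the last frame
    while (nextIndex < algoPath.length - 1 && now - lastDraw >= msPerNode) {
      const [x, y] = algoPath[nextIndex];
      getLocationByCoordinates(x, y).innerHTML = SHORTEST_PATH_NODE;
      nextIndex++;
      lastDraw += msPerNode;
    }
    if (nextIndex < algoPath.length - 1) {
      requestAnimationFrame(tick);    // pauses automatically in hidden tabs
    }
  }
  requestAnimationFrame(tick);
}

The catch-up loop means a slow frame draws several nodes at once, so the animation keeps overall pace instead of drifting.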

How (if at all) can AnalyserNode be used to analyze a whole audio file, and get the data necessary for visualization?

In particular, I am interested in the AnalyserNode.getFloatTimeDomainData() method.
Can I use this method to create a waveform that represents the entire audio file?
Or is AnalyserNode only for audio streams?
The MDN documentation says:
The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information. It is an AudioNode that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations.
https://developer.mozilla.org/en-US/docs/Web/API/AnalyserNode
All examples seem to use AudioBuffer.getChannelData() for the purpose of getting data for the visualization of an entire audio file. As in this article:
https://css-tricks.com/making-an-audio-waveform-visualizer-with-vanilla-javascript/
From the article:
const filterData = audioBuffer => {
  const rawData = audioBuffer.getChannelData(0); // We only need to work with one channel of data
  const samples = 70; // Number of samples we want to have in our final data set
  const blockSize = Math.floor(rawData.length / samples); // the number of samples in each subdivision
  const filteredData = [];
  for (let i = 0; i < samples; i++) {
    let blockStart = blockSize * i; // the location of the first sample in the block
    let sum = 0;
    for (let j = 0; j < blockSize; j++) {
      sum = sum + Math.abs(rawData[blockStart + j]); // find the sum of all the samples in the block
    }
    filteredData.push(sum / blockSize); // divide the sum by the block size to get the average
  }
  return filteredData;
}
You say in the title that you have the whole file. If that's the case, you can decode the file (decodeAudioData) and you'll get an AudioBuffer with all of the samples, which you can then use for visualization.
If you cannot decode the entire file into memory, you'll have to use a different method, and AnalyserNode will probably not work: it's asynchronous, so you may not get all of the samples, or you may get duplicates. If that is not acceptable, you'll have to use, perhaps, a MediaStreamAudioSourceNode connected to a ScriptProcessorNode or an AudioWorkletNode to get the time-domain data that you want.
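For illustration, a minimal sketch of that decode-first approach, feeding the filterData function above (the URL and the drawBars callback are assumed names):

const audioCtx = new AudioContext();

// Decode the whole file into an AudioBuffer, then reduce it with filterData
async function waveformData(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);
  return filterData(audioBuffer); // 70 averaged amplitude values
}

// e.g. waveformData('track.mp3').then(data => drawBars(data)); // drawBars assumed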

How to distort just a specific frequency range using javascript web audio?

I have a small web app that models deafness based on a person's individual audiogram (http://howdeaf.com). Essentially, it goes:
source -> 2-ch splitter -> 6 inline frequency separators (left)  -> merge node -> destination
                        -> 6 inline frequency separators (right) ->
The frequency separators are created using this code:
for (let i = 0; i < bandSplit.length; i++) {
  for (let j in sides) {
    var side = sides[j];
    let filterNode = context.createBiquadFilter();
    filterNode.frequency.value = bandSplit[i];
    if (i === 0)
      filterNode.type = 'lowshelf';
    else if (i === bandSplit.length - 1)
      filterNode.type = 'highshelf';
    else
      filterNode.type = 'peaking';
    filterNode.gain.value = 0.0;
    this.eqNodes[side].push(filterNode);
    if (i > 0)
      this.eqNodes[side][i - 1].connect(this.eqNodes[side][i]);
  }
}
I adjust the gain on each of the 12 frequency separator nodes, and it's a pretty neat simulation.
I'd like to add distortion (using createWaveShaper()) individually, with different curve values, to each of those 12 biquad filter nodes, but I've had no success distorting just an individual frequency band. Any distortion node I create affects the entire audio output for that channel.
Is there any way to apply distortion to those frequency-selection nodes on an individual basis?
All but two of your filters are peaking filters with a gain of 0. That basically lets everything through. So whatever wave shaping you apply gets applied to the whole frequency band. (See, for example, http://googlechrome.github.io/web-audio-samples/samples/audio/mag-phase.html to get graphs of the responses.)
Consider using different kinds of filters, such as bandpass filters. This might not be appropriate for your application, but at least these filters don't let all frequencies through.
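For illustration, a minimal sketch of that idea: run parallel bandpass branches, each through its own WaveShaperNode, and sum them back together. makeDistortionCurve is an assumed helper for building per-band curves.

// Split the source into parallel bands, distort each band with its own
// curve, then sum the bands back together.
function buildBandDistortion(context, source, destination, bandFrequencies) {
  const merger = context.createGain(); // sums the parallel branches
  bandFrequencies.forEach(function (freq) {
    const filter = context.createBiquadFilter();
    filter.type = 'bandpass';        // passes only this band, unlike 0-gain peaking
    filter.frequency.value = freq;
    filter.Q.value = 1.0;

    const shaper = context.createWaveShaper();
    shaper.curve = makeDistortionCurve(freq); // assumed per-band curve helper

    source.connect(filter);
    filter.connect(shaper);  // distortion now applies to this band only
    shaper.connect(merger);
  });
  merger.connect(destination);
}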

Dividing one audio signal by another one

Short version
I need to divide an audio signal by another one (amplitude-wise). How could I accomplish this in the Web Audio API, without using ScriptProcessorNode? (with ScriptProcessorNode the task is trivial, but it is completely unusable for production due to the inherent performance issues)
Long version
Consider two audio sources, two OscillatorNodes for example, oscA and oscB:
var oscA = audioCtx.createOscillator();
var oscB = audioCtx.createOscillator();
Now, consider that these oscillators are LFOs, both with low (i.e. <20Hz) frequencies, and that their signals are used to control a single destination AudioParam, for example, the gain of a GainNode. Through various routing setups, we can define mathematical operations between these two signals.
Addition
If oscA and oscB are both directly connected to the destination AudioParam, their outputs are added together:
var dest = audioCtx.createGain();
oscA.connect(dest.gain);
oscB.connect(dest.gain);
Subtraction
If the output of oscB is first routed through another GainNode with a gain of -1, which is then connected to the destination AudioParam, then the output of oscB is subtracted from that of oscA, because we are effectively doing an oscA + (-oscB) operation. Using this trick we can subtract one signal from another:
var dest = audioCtx.createGain();
var inverter = audioCtx.createGain();
oscA.connect(dest.gain);
oscB.connect(inverter);
inverter.gain.value = -1;
inverter.connect(dest.gain);
Multiplication
Similarly, if the output of oscA is connected to another GainNode, and the output of oscB is connected to the gain AudioParam of that GainNode, then oscB is multiplying the signal of oscA:
var dest = audioCtx.createGain();
var multiplier = audioCtx.createGain();
oscA.connect(multiplier);
oscB.connect(multiplier.gain);
multiplier.connect(dest.gain);
Division (?)
Now, I want the output of oscB to divide the output of oscA. How do I do this, without using ScriptProcessorNode?
Edit
My earlier, absolutely ridiculous attempts at solving this problem were:
Using a PannerNode to control the positionZ param, which did yield a result that decreased as signal B (oscB) increased, but it was completely off (e.g. it yielded 12/1 = 8.5 and 12/2 = 4.2). This can be compensated for with a GainNode whose gain is set to 12 / 8.48528099060058593750 (an approximation), but that only supports values >= 1
Using an AnalyserNode to rapidly sample the audio signal and then use that value (LOL)
Edit 2
The reason why the ScriptProcessorNode is essentially useless for applications more complex than a tech demo is that:
it executes audio processing on the main thread (!), and heavy UI work will introduce audio glitches
a single, dead simple ScriptProcessorNode will take 5% CPU power on a modern device, as it performs processing with JavaScript and requires data to be passed between the audio thread (or rendering thread) and the main thread (or UI thread)
It should be noted that ScriptProcessorNode is deprecated.
If you need A/B, then you need 1/B, an inverted signal. You can use a WaveShaperNode to perform the inversion. This node takes an array of corresponding values; inversion means that -1 maps to -1, -0.5 maps to -2, etc.
In addition, make sure that you handle division by zero. In the following code I just take the next value after zero and double it.
function makeInverseCurve() {
  var n_samples = 44100,
      curve = new Float32Array(n_samples),
      x;
  for (var i = 0; i < n_samples; i++) {
    x = i * 2 / n_samples - 1;
    // at x = 0, avoid dividing by zero: use twice the next value instead
    curve[i] = (i * 2 == n_samples) ? n_samples : 1 / x;
  }
  return curve;
}
Working fiddle is here. If you remove .connect(distortion) from the audio chain, you see a simple sine wave. The visualization code came from Sonoport.
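Putting the pieces together, a minimal sketch of the full A/B chain, combining the multiplication routing from the question with makeInverseCurve above (a sketch under those assumptions; the curve's resolution limits precision near zero):

var audioCtx = new AudioContext();
var oscA = audioCtx.createOscillator();
var oscB = audioCtx.createOscillator();

var inverse = audioCtx.createWaveShaper();
inverse.curve = makeInverseCurve(); // shapes B into 1/B

var divider = audioCtx.createGain();
divider.gain.value = 0;        // base value; the shaped signal drives the param
oscA.connect(divider);         // A is the signal being scaled
oscB.connect(inverse);
inverse.connect(divider.gain); // gain becomes 1/B, so the output is A * 1/B

divider.connect(audioCtx.destination);
oscA.start();
oscB.start();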

HTML5 Canvas game loop delta time calculations

I'm new to game development. Currently I'm making a game for the js13kgames contest, so the game should be small, and that's why I don't use any modern popular frameworks.
While developing my infinite game loop, I found several articles and pieces of advice on how to implement it. Right now it looks like this:
self.gameLoop = function () {
  self.dt = 0;
  var now;
  var lastTime = timestamp();
  var fpsmeter = new FPSMeter({decimals: 0, graph: true, theme: 'dark', left: '5px'});

  function frame () {
    fpsmeter.tickStart();
    now = window.performance.now();
    // first variant - delta is increasing..
    self.dt = self.dt + Math.min(1, (now - lastTime) / 1000);
    // second variant - delta is stable..
    self.dt = (now - lastTime) / 16;
    self.dt = (self.dt > 10) ? 10 : self.dt;

    self.clearRect();
    self.createWeapons();
    self.createTargets();
    self.update('weapons');
    self.render('weapons');
    self.update('targets');
    self.render('targets');

    self.ticks++;
    lastTime = now;
    fpsmeter.tick();
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
};
So the problem is in self.dt. I eventually found out that the first variant is not suitable for my game, because it increases forever and the speed of weapons increases with it as well (e.g. this.position.x += (Math.cos(this.angle) * this.speed) * self.dt;).
The second variant looks more suitable, but does it correspond to this kind of loop (http://codeincomplete.com/posts/2013/12/4/javascript_game_foundations_the_game_loop/)?
Here's an implementation of an HTML5 rendering system using a fixed time step with a variable rendering time:
http://jsbin.com/ditad/10/edit?js,output
It's based on this article:
http://gameprogrammingpatterns.com/game-loop.html
Here is the game loop:
//Set the frame rate
var fps = 60,
    //Get the start time
    start = Date.now(),
    //Set the frame duration in milliseconds
    frameDuration = 1000 / fps,
    //Initialize the lag offset
    lag = 0;

//Start the game loop
gameLoop();

function gameLoop() {
  requestAnimationFrame(gameLoop, canvas);
  //Calculate the time that has elapsed since the last frame
  var current = Date.now(),
      elapsed = current - start;
  start = current;
  //Add the elapsed time to the lag counter
  lag += elapsed;
  //Update the frame if the lag counter is greater than or
  //equal to the frame duration
  while (lag >= frameDuration) {
    //Update the logic
    update();
    //Reduce the lag counter by the frame duration
    lag -= frameDuration;
  }
  //Calculate the lag offset and use it to render the sprites
  var lagOffset = lag / frameDuration;
  render(lagOffset);
}
The render function calls a render method on each sprite, with a reference to the lagOffset:
function render(lagOffset) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  sprites.forEach(function(sprite) {
    ctx.save();
    //Call the sprite's `render` method and feed it the
    //canvas context and lagOffset
    sprite.render(ctx, lagOffset);
    ctx.restore();
  });
}
Here's the sprite's render method that uses the lag offset to interpolate the sprite's render position on the canvas.
o.render = function(ctx, lagOffset) {
  //Use the `lagOffset` and previous x/y positions to
  //calculate the render positions
  o.renderX = (o.x - o.oldX) * lagOffset + o.oldX;
  o.renderY = (o.y - o.oldY) * lagOffset + o.oldY;

  //Render the sprite
  ctx.strokeStyle = o.strokeStyle;
  ctx.lineWidth = o.lineWidth;
  ctx.fillStyle = o.fillStyle;
  ctx.translate(
    o.renderX + (o.width / 2),
    o.renderY + (o.height / 2)
  );
  ctx.beginPath();
  ctx.rect(-o.width / 2, -o.height / 2, o.width, o.height);
  ctx.stroke();
  ctx.fill();

  //Capture the sprite's current positions to use as
  //the previous positions on the next frame
  o.oldX = o.x;
  o.oldY = o.y;
};
The important part is this bit of code that uses the lagOffset and the difference in the sprite's rendered position between frames to figure out its new current canvas position:
o.renderX = (o.x - o.oldX) * lagOffset + o.oldX;
o.renderY = (o.y - o.oldY) * lagOffset + o.oldY;
Notice that the oldX and oldY values are being re-calculated each frame at the end of the method, so that they can be used in the next frame to help figure out the difference.
o.oldX = o.x;
o.oldY = o.y;
I'm actually not sure if this interpolation is completely correct or if this is the best way to do it. If anyone out there reading this knows that it's wrong, please let us know :)
The modern version of requestAnimationFrame now sends in a timestamp that you can use to calculate elapsed time. When your desired time interval has elapsed you can do your update, create and render tasks.
Here's example code:
var lastTime;
var requiredElapsed = 1000 / 10; // desired interval is 10fps

requestAnimationFrame(loop);

function loop(now) {
  requestAnimationFrame(loop);
  if (!lastTime) { lastTime = now; }
  var elapsed = now - lastTime;
  if (elapsed > requiredElapsed) {
    // do stuff
    lastTime = now;
  }
}
This isn't really an answer to your question, and without knowing more about the particular game I can't say for sure whether it will help you, but do you really need to know dt (and FPS)?
In my limited forays into JS game development, I've found that you often don't really need to calculate any kind of dt, as you can usually come up with a sensible default value based on your expected frame rate, and make anything time-based (such as weapon reloading) simply work off the number of ticks (i.e. a bow might take 60 ticks to reload (~1 second @ ~60FPS)).
I usually use window.setTimeout() rather than window.requestAnimationFrame(), which I've found generally provides a more stable frame rate which will allow you to define a sensible default to use in place of dt. On the down-side the game will be more of a resource hog and less performant on slower machines (or if the user has a lot of other things running), but depending on your use case those may not be real concerns.
Now this is purely anecdotal advice so you should take it with a pinch of salt, but it has served me pretty well in the past. It all depends on whether you mind the game running more slowly on older/less powerful machines, and how efficient your game loop is. If it's something simple that doesn't need to display real times you might be able to do away with dt completely.
At some point you will want to think about decoupling your physics from your rendering. Otherwise your players could have inconsistent physics. For example, someone with a beefy machine getting 300fps will have very sped-up physics compared to someone chugging along at 30fps. This could manifest in the first player cruising around in a Mario-like scrolling game at super speed and the other player crawling at half speed (if you did all your testing at 60fps). A way to fix that is to introduce delta time steps. The idea is that you find the time between each frame and use that as part of your physics calculations. It keeps the gameplay consistent regardless of frame rate. Here is a good article to get you started: http://gafferongames.com/game-physics/fix-your-timestep/
requestAnimationFrame will not fix this inconsistency, but it is still a good thing to use sometimes as it has battery saving advantages. Here is a source for more info http://www.chandlerprall.com/2012/06/requestanimationframe-is-not-your-logics-friend/
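For illustration, a minimal sketch of a delta-time step in that spirit (player, draw, and the speed fields are assumed names; speeds are expressed in pixels per second rather than pixels per frame):

var lastTime = performance.now();

function frame(now) {
  var dt = (now - lastTime) / 1000; // seconds elapsed since the last frame
  lastTime = now;

  // Physics scaled by dt: the same on-screen speed regardless of frame rate
  player.x += player.speedX * dt; // speedX in pixels per second (assumed)
  player.y += player.speedY * dt;

  draw(); // assumed render call
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);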
I did not check the logic of the math in your code; however, here is what works for me:
GameBox = function () {
  this.lastFrameTime = Date.now();
  this.currentFrameTime = Date.now();
  this.timeElapsed = 0;
  this.updateInterval = 2000; // in ms
}

GameBox.prototype.gameLoop = function () {
  window.requestAnimationFrame(this.gameLoop.bind(this));
  this.lastFrameTime = this.currentFrameTime;
  this.currentFrameTime = Date.now();
  this.timeElapsed += this.currentFrameTime - this.lastFrameTime;
  if (this.timeElapsed >= this.updateInterval) {
    this.timeElapsed = 0;
    this.update(); // modify data which is used to render
  }
  this.render();
}
This implementation is independent of the CPU speed (ticks). Hope you can make use of it!
A great approach for your game engine would be to think in objects and entities. You can think of everything in your world as objects and entities. Then you want to make a game object manager that keeps a list of all your game objects, and a common communication method in the engine so game objects can fire event triggers. The entities in your game, for example a player, would not need to inherit from anything to gain the ability to render to the screen or to have collision detection. You would simply implement common methods in the entity that the game engine looks for, and let the game engine handle the entity as it sees fit. Entities in your game can be created or destroyed at any time, so you should not hard-code any entities in the game loop.
You will want other objects in your game engine to respond to event triggers that the engine has received. This can be done with methods on the entity: the game engine checks whether the method is available and, if it is, passes the event to the entity. Do not hard-code any of your game logic into the engine; it hurts portability and limits your ability to expand the game later on.
The problem with your code is, first, that you're calling the different objects' renders and updates in the wrong order. You need to call ALL your updates, then call ALL your renders, in that order. Second, hard-coding objects into the loop is going to give you a lot of problems when you want one of the objects to no longer be in the game, or when you want to add more objects later on.
Your game objects will have an update() and a render(); your game engine will look for those functions on the object/entity and call them every frame. You can get very fancy and make the engine check whether the game object/entity has a function prior to calling it. For example, maybe you want an object that has an update() but never renders anything to the screen; you can make the functions optional by having the engine check before calling them. It's also good practice to have an init() function for all game objects: when the game engine starts up the scene and creates the objects, it calls each object's init() once on creation and then update() every frame, so you have one function that runs a single time on creation and another that runs every frame.
Delta time is not really needed, as window.requestAnimationFrame(frame); will give you ~60fps. So if you're keeping track of the frame count, you can tell how much time has passed. Different objects in your game can then, based on a reference point in the game and what the frame count was, determine how long they have been doing something from the new frame count.
window.requestAnimationFrame = window.requestAnimationFrame || function (callback) { window.setTimeout(callback, 16); };

gameEngine = function () {
  this.frameCount = 0;
  var self = this;
  this.update = function () {
    // loop over your objects and run each object's update function
  }
  this.render = function () {
    // loop over your objects and run each object's render function
  }
  this.frame = function () {
    self.update();
    self.render();
    self.frameCount++;
    window.requestAnimationFrame(self.frame);
  }
  this.frame();
};
I have created a full game engine, located at https://github.com/Patrick-W-McMahon/Jinx-Engine/. If you review the code at https://github.com/Patrick-W-McMahon/Jinx-Engine/blob/master/JinxEngine.js you will see a fully functional game engine built 100% in JavaScript. It includes event handlers and permits action calls between objects that are passed into the engine using the event call stack. Check out some of the examples at https://github.com/Patrick-W-McMahon/Jinx-Engine/tree/master/examples to see how it works. The engine can run around 100,000 objects, all being rendered and executed per frame at a rate of 60fps (tested on a Core i5; different hardware may vary). Mouse and keyboard events are built into the engine; objects passed into the engine just need to listen for the events the engine passes along. Scene management and multi-scene support are currently being built in for more complex games. The engine also supports high pixel density screens.
Reviewing my source code should get you on track for building a more fully functional game engine.
I would also like to point out that you should call requestAnimationFrame() when you're ready to repaint, not before (i.e., at the end of the game loop). One good example of why you should not call requestAnimationFrame() at the beginning of the loop is if you're using a canvas buffer. If you call requestAnimationFrame() at the beginning and then begin to draw to the canvas buffer, the repaint can show half of the new frame with the other half being the old frame. This can happen on every frame, depending on the time it takes to finish the buffer relative to the repaint cycle (60fps), and the frames overlap, so the buffer gets more messed up as it loops over itself. This is why you should only call requestAnimationFrame() when the buffer is fully ready to draw to the canvas. By having requestAnimationFrame() at the end, you can skip a repaint when the buffer is not ready, so every repaint is drawn as expected. The position of requestAnimationFrame() in the game loop makes a big difference.
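For illustration, a minimal sketch of that end-of-loop pattern with an offscreen buffer (the canvas element, update, and drawScene are assumed placeholders):

var canvas = document.getElementById('game');   // assumed on-screen canvas
var ctx = canvas.getContext('2d');
var buffer = document.createElement('canvas');  // offscreen buffer
buffer.width = canvas.width;
buffer.height = canvas.height;
var bufferCtx = buffer.getContext('2d');

function frame() {
  update();                      // game logic first (assumed)
  bufferCtx.clearRect(0, 0, buffer.width, buffer.height);
  drawScene(bufferCtx);          // assumed: render the frame offscreen
  ctx.drawImage(buffer, 0, 0);   // blit the finished frame in one call
  requestAnimationFrame(frame);  // schedule the next frame only now
}
requestAnimationFrame(frame);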
