I am writing a front end for a game engine in JavaScript. The engine runs on the server and sends pictures and sounds to the web browser through SignalR. I am using the React framework.
As the game runs, the server sends small sound samples in WAVE format, which are passed into this component through AudioPlayerProps.
I am having two main issues with the sound. The first is that playback sounds 'disjointed'.
The second is that after a while the sound just stops playing. I can see samples being queued in the audio queue, but the playNextAudioTrack method is no longer being called. There are no errors in the console to explain this.
If this is not the best way to provide sound for a game front end, please let me know.
Also, if you want to see any more code, please let me know. This is a huge multi-tiered project, so I am only showing what I think you need to see.
Right now I am testing in Chrome. At this stage I have to open the dev tools to get past Chrome's 'user didn't interact with the page, so you can't play any sound' restriction. I will sort that issue out in due course.
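(For reference, the usual way around that restriction is to create or resume the AudioContext inside the first user gesture. A minimal sketch, assuming a single shared context for the whole front end:)
// Sketch: unlock audio on the first user interaction.
// Assumes one shared AudioContext is used everywhere.
const sharedContext = new AudioContext();

document.addEventListener("click", () => {
    if (sharedContext.state === "suspended") {
        sharedContext.resume(); // returns a promise; safe to fire-and-forget here
    }
}, { once: true });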
import * as React from "react";
import { useEffect, useState } from "react";

export interface AudioPlayerProps {
    data: string;
}

export const AudioPlayer = function (props: AudioPlayerProps): JSX.Element {
    const [audioQueue, setAudioQueue] = useState<string[]>([])

    useEffect(
        () => {
            if (props.data != undefined) {
                audioQueue.push(props.data);
            }
        }, [props.data]);

    const playNextAudioTrack = () => {
        if (audioQueue.length > 0) {
            const audioBase64 = audioQueue.pop();
            const newAudio = new Audio(`data:audio/wav;base64,${audioBase64}`)
            newAudio.play().then(playNextAudioTrack).catch(
                (error) => {
                    setTimeout(playNextAudioTrack, 10);
                }
            )
        }
        else {
            setTimeout(playNextAudioTrack, 10);
        }
    }

    useEffect(playNextAudioTrack, []);

    return null;
}
I solved my own problem. Here is the TypeScript class I wrote to handle chunked audio in the browser.
I am not a JavaScript expert, so there may be faults.
EDIT: After running it in 15-minute lots several times, it failed a couple of times at about the 10-minute mark. It still needs some work.
// mostly from https://gist.github.com/revolunet/e620e2c532b7144c62768a36b8b96da2
// Modified to play chunked audio for games
// Note: the browser's global setInterval is used below; importing it from "timers"
// (a Node.js module) breaks in the browser.

const MaxScheduled = 10;
const MaxQueueLength = 2000;
const MinScheduledToStopDraining = 5;

export class WebAudioStreamer {
    constructor() {
        this.isDraining = false;
        this.isWorking = false;
        this.audioStack = [];
        this.nextTime = 0;
        this.numberScheduled = 0;
        // poll the queue and schedule whenever new buffers arrive
        setInterval(() => {
            if (this.audioStack.length && !this.isWorking) {
                this.scheduleBuffers(this);
            }
        }, 0);
    }

    context: AudioContext;
    audioStack: AudioBuffer[];
    nextTime: number;
    numberScheduled: number;
    isDraining: boolean;
    isWorking: boolean;

    pushOntoAudioStack(encodedBytes: number[]) {
        if (this.context == undefined) {
            this.context = new (window.AudioContext)();
        }
        const encodedBuffer = new Uint8ClampedArray(encodedBytes).buffer;
        const streamer: WebAudioStreamer = this;
        // drop the backlog if the queue grows far too long
        if (this.audioStack.length > MaxQueueLength) {
            this.audioStack = [];
        }
        streamer.context.decodeAudioData(encodedBuffer, function (decodedBuffer) {
            streamer.audioStack.push(decodedBuffer);
        });
    }

    scheduleBuffers(streamer: WebAudioStreamer) {
        streamer.isWorking = true;
        if (streamer.context == undefined) {
            streamer.context = new (window.AudioContext)();
        }
        if (streamer.isDraining && streamer.numberScheduled <= MinScheduledToStopDraining) {
            streamer.isDraining = false;
        }
        while (streamer.audioStack.length && !streamer.isDraining) {
            const buffer = streamer.audioStack.shift();
            const source = streamer.context.createBufferSource();
            source.buffer = buffer;
            source.connect(streamer.context.destination);
            if (streamer.nextTime == 0)
                streamer.nextTime = streamer.context.currentTime + 0.01; // add 10 ms of latency to work well across systems - tune this if you like
            source.start(streamer.nextTime);
            streamer.nextTime += source.buffer.duration; // make the next buffer wait the length of the previous one before playing
            streamer.numberScheduled++;
            source.onended = function () {
                streamer.numberScheduled--;
            }
            if (streamer.numberScheduled == MaxScheduled) {
                streamer.isDraining = true;
            }
        }
        streamer.isWorking = false;
    }
}
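For reference, here is roughly how the streamer gets fed. This is a minimal sketch only: the SignalR hub wiring, the "ReceiveAudio" event name, and the payload shape are assumptions for illustration, not the exact project code.
// Hypothetical wiring: push each WAVE chunk from the server into the streamer.
const streamer = new WebAudioStreamer();

// "ReceiveAudio" and the number[] payload are illustrative assumptions
connection.on("ReceiveAudio", (encodedBytes: number[]) => {
    streamer.pushOntoAudioStack(encodedBytes);
});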
I want to be able to change the value of a global variable when it is being used by a function as a parameter.
My JavaScript:
function playAudio(audioFile, canPlay) {
    if (canPlay < 2 && audioFile.paused) {
        canPlay = canPlay + 1;
        audioFile.play();
    } else {
        if (canPlay >= 2) {
            alert("This audio has already been played twice.");
        } else {
            alert("Please wait for the audio to finish playing.");
        }
    }
}

const btnPitch01 = document.getElementById("btnPitch01");
const audioFilePitch01 = new Audio("../aud/Pitch01.wav");
var canPlayPitch01 = 0;

btnPitch01.addEventListener("click", function() {
    playAudio(audioFilePitch01, canPlayPitch01);
});
My HTML:
<body>
    <button id="btnPitch01">Play Pitch01</button>
    <button id="btnPitch02">Play Pitch02</button>
    <script src="js/js-master.js"></script>
</body>
My scenario:
I'm building a Musical Aptitude Test for personal use that won't be hosted online. There are going to be hundreds of buttons, each corresponding to its own audio file. Each audio file may be played no more than twice, and buttons may not be pressed while their corresponding audio files are already playing.
All of that was working completely fine until I optimised the function to use parameters. I know this is good for avoiding copy-pasting the same function hundreds of times, but it has broken the solution I used to prevent the audio from being played more than twice. The "canPlayPitch01" variable, when passed as a parameter, no longer gets incremented, which makes the if (canPlay < 2) check useless.
How would I go about solving this? Even if it is bad coding practice, I would prefer to keep using my current method, because I think it is a very logical one.
I'm a beginner and know very little, so please forgive any mistakes or poor coding practices. I welcome corrections and tips.
Thank you very much!
It's not possible: JavaScript passes arguments by value, not by reference, so for a number like canPlay the function only gets a copy, and reassigning the parameter doesn't touch the caller's variable. You should return the new value and have the caller assign it back to the variable.
function playAudio(audioFile, canPlay) {
    if (canPlay < 2 && audioFile.paused) {
        canPlay = canPlay + 1;
        audioFile.play();
    } else {
        if (canPlay >= 2) {
            alert("This audio has already been played twice.");
        } else {
            alert("Please wait for the audio to finish playing.");
        }
    }
    return canPlay;
}

const btnPitch01 = document.getElementById("btnPitch01");
const audioFilePitch01 = new Audio("../aud/Pitch01.wav");
var canPlayPitch01 = 0;

btnPitch01.addEventListener("click", function() {
    canPlayPitch01 = playAudio(audioFilePitch01, canPlayPitch01);
});
A small improvement to the data model will fix the stated problem and probably bring quite a few side benefits elsewhere in the code.
Your data looks like this:
const btnPitch01 = document.getElementById("btnPitch01");
const audioFilePitch01 = new Audio("../aud/Pitch01.wav");
var canPlayPitch01 = 0;
// and, judging by the naming used, there's probably more like this:
const btnPitch02 = document.getElementById("btnPitch02");
const audioFilePitch02 = new Audio("../aud/Pitch02.wav");
var canPlayPitch02 = 0;
// and so on
Now consider restructuring that global data to look like this:
const model = {
    btnPitch01: {
        canPlay: 0,
        el: document.getElementById("btnPitch01"),
        audioFile: new Audio("../aud/Pitch01.wav")
    },
    btnPitch02: { /* and so on */ }
}
Your event listener(s) can say:
btnPitch01.addEventListener("click", function(event) {
    // notice how (if this is all that's done here) we can shrink this even further later
    playAudio(event);
});
And your playAudio function can have a side effect on the data:
function playAudio(event) {
    // here's how we get from the button to the model item
    const item = model[event.target.id];
    if (item.canPlay < 2 && item.audioFile.paused) {
        item.canPlay++;
        item.audioFile.play();
    } else {
        if (item.canPlay >= 2) {
            alert("This audio has already been played twice.");
        } else {
            alert("Please wait for the audio to finish playing.");
        }
    }
}
Side note: the model can probably be built in code...
// you can automate this even more using String padStart() on 1, 2, 3...
const baseIds = [ '01', '02', /* ... */ ];

const model = Object.fromEntries(
    baseIds.map(baseId => {
        const id = `btnPitch${baseId}`;
        const value = {
            canPlay: 0,
            el: document.getElementById(id),
            audioFile: new Audio(`../aud/Pitch${baseId}.wav`)
        };
        return [id, value];
    })
);

// you can build the event listeners in a loop, too
// (or in the loop above)
Object.values(model).forEach(value => {
    value.el.addEventListener("click", playAudio)
})
Below is an example of the approach, using data attributes on the button itself.
btnPitch01.addEventListener("click", function() {
    // dataset values are strings (and undefined unless set in the markup),
    // so compare them as numbers
    if ( Number(this.dataset.numberOfPlays) >= Number(this.dataset.allowedNumberOfPlays) ) return;
    playAudio(audioFilePitch01, canPlayPitch01);
    this.dataset.numberOfPlays++;
});
You would want to select all of your buttons and attach this handler to them after your HTML is loaded.
https://developer.mozilla.org/en-US/docs/Web/API/Document/getElementsByClassName
// getElementsByClassName returns an HTMLCollection, which has no forEach,
// so convert it to an array first
const listOfButtons = Array.from(document.getElementsByClassName('pitchButton'));

listOfButtons.forEach( item => {
    // use a regular function so "this" refers to the clicked button
    // (arrow functions don't get their own "this")
    item.addEventListener("click", function() {
        if ( Number(this.dataset.numberOfPlays) >= Number(this.dataset.allowedNumberOfPlays) ) return;
        playAudio("audioFilePitch" + this.id);
        this.dataset.numberOfPlays++;
    });
});
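For either version to work, the buttons need the data attributes present in the markup. Something like this (the attribute values and ids are illustrative, not from the question):
<!-- hypothetical markup: each button starts at zero plays with a limit of two -->
<button class="pitchButton" id="01" data-number-of-plays="0" data-allowed-number-of-plays="2">Play Pitch01</button>
<button class="pitchButton" id="02" data-number-of-plays="0" data-allowed-number-of-plays="2">Play Pitch02</button>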
I am trying to trigger an alarm after a countdown ends. I am using Howler.js to play the alarm, but for some reason it displays this error: howler.js:2500 The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page. I switched to Howler because a similar error appeared when I tried to use the Audio() Web API in JavaScript.
Here is my code:
const { Howl, Howler } = require("howler");

let i;
let timer;

var alarm = new Howl({
    src: ["alarm.mp3"],
});

let playTimer = true;
const timerDisplay = document.getElementById("timerDisplay");

document.addEventListener("DOMContentLoaded", () => {
    i = 5;
    timer = setInterval(() => {
        if (playTimer == true) {
            if (i != 0) {
                timerDisplay.textContent = i;
                i--;
            }
            if (i == 0) {
                alarm.play();
                clearInterval(timer);
            }
        }
    }, 1000);
});
I don't know if this is helpful or not, but just to let you know, I am using Parcel as my bundler. Thank you in advance for any answer.
As #Kokodoko said in the comments of my question, the audio had to be triggered by some form of user interaction. So, I added a button in my HTML and replaced the
document.addEventListener("DOMContentLoaded", () => {})
with
startTimerBtn.addEventListener("click", () => {})
So this is the final code:
const { Howl, Howler } = require("howler");

let i;
let timer;

var alarm = new Howl({
    src: ["alarm.mp3"],
});

let playTimer = true;
const timerDisplay = document.getElementById("timerDisplay");
const startTimerBtn = document.getElementById("startTimer");

startTimerBtn.addEventListener("click", () => {
    i = 5;
    timer = setInterval(() => {
        if (playTimer == true) {
            if (i != 0) {
                timerDisplay.textContent = i;
                i--;
            }
            if (i == 0) {
                alarm.play();
                clearInterval(timer);
            }
        }
    }, 1000);
});
Although this wasn't the solution I intended to get, it suits my needs well.
Also, one small thing for others using Parcel.js like me: in case you don't know, you have to put your audio file in Parcel's dist directory so that it can be found and played. For some reason, it wasn't being bundled automatically.
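(A hedged side note: depending on your Parcel version, you may be able to let the bundler track the asset instead of copying it by hand. The sketch below assumes Parcel 2's url: import scheme resolves to the bundled file's URL; the file name comes from the question, the rest is illustrative.)
// Assumption: Parcel 2's "url:" pipeline copies the asset and returns its final URL
import alarmUrl from "url:./alarm.mp3";

var alarm = new Howl({
    src: [alarmUrl],
});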
I'm working on a board game with an AI mode. Now I'm trying to call the AI's move, but it is called asynchronously, and I don't know what to do. I've tried a lot of things: calling setTimeout synchronously, calling it inside the onClick... Do you have any solutions?
Here's the code:
render() {
    // data that will be rendered regularly comes here
    const current = this.state.history[this.state.stepNumber]
    const elemsPlayer = this.checkElementsAround(this.checkEmptySpaces())

    // note: the ternary needs its own parentheses, since "!==" binds tighter than "?:"
    if (this.state.ourPlayer !== (this.state.blackIsNext ? 'black' : 'white')) {
        if (this.state.gameMode === "AI-Easy") {
            setTimeout(() => {
                this.easyAI()
            }, 1500)
        } else if (this.state.gameMode === "AI-Hard") {
            setTimeout(() => {
                this.minimaxAI()
            }, 1500)
        }
    }

    // Return the game scene here
    return (
        <div id="master-container">
            <div id="gameboard-container">
                <GameBoard squares={current} onClick={(row, col) => {
                    // Where the player's move will be called
                }}/>
            </div>
        </div>
    )
}
Here's the easyAI function (minimax one is empty for now):
easyAI() {
    const elemsAI = this.checkElementsAround(this.checkEmptySpaces())
    const validsAI = []
    for (let elAI = 0; elAI < elemsAI.length; elAI++) {
        const turningAI = this.checkTurningStones(elemsAI[elAI].directions, this.state.blackIsNext)
        if (turningAI.length !== 0) {
            validsAI.push(elemsAI[elAI])
        }
    }
    // validsAI holds objects { emptySquares: coordinates of the empty square,
    // turningStones: coordinates of the stones that will be turned upside down if emptySquares is played }

    // Check if the AI has a move to make
    if (validsAI.length !== 0) {
        // Pick a random valid move
        var elementToTurnAI = null
        try {
            elementToTurnAI = validsAI[Math.floor(Math.random() * validsAI.length)]
        } catch {
            console.log(null)
        }
        const turningAI = this.checkTurningStones(elementToTurnAI.directions, this.state.blackIsNext)
        const selfAI = elementToTurnAI.coordinates
        turningAI.unshift([selfAI[0], selfAI[1]])
        // Turn the stones
        const upcomingAI = this.handleMove(turningAI)
        // Update the state
        this.setState({
            history: upcomingAI,
            stepNumber: upcomingAI.length - 1
        }, () => {
            this.setWinnerAndTurn(this.state.history)
        })
    }
}
If you have to take a look at something else in order to find a solution, please do let me know.
Thanks in advance :)
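(One observation on the render() above, offered as a suggestion rather than a definitive fix: anything scheduled inside render runs again on every re-render, so the AI move can get queued repeatedly. The usual React pattern is to move that side effect into a lifecycle method. A minimal sketch, assuming the same state fields as the question:)
// Sketch: trigger the AI from componentDidUpdate instead of render(),
// so the timeout is scheduled once per turn change rather than once per render
componentDidUpdate(prevProps, prevState) {
    const turnColor = this.state.blackIsNext ? 'black' : 'white'
    if (prevState.blackIsNext !== this.state.blackIsNext &&
            this.state.ourPlayer !== turnColor) {
        if (this.state.gameMode === "AI-Easy") {
            setTimeout(() => this.easyAI(), 1500)
        } else if (this.state.gameMode === "AI-Hard") {
            setTimeout(() => this.minimaxAI(), 1500)
        }
    }
}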
I am in the process of replacing RecordRTC with the built-in MediaRecorder for recording audio in Chrome. The recorded audio is then played in the program with the Audio API. I am having trouble getting the audio.duration property to work. The docs say:
"If the video (audio) is streamed and has no predefined length, 'Inf' (Infinity) is returned."
With RecordRTC, I had to use ffmpeg_asm.js to convert the audio from WAV to Ogg. My guess is that somewhere in that process RecordRTC sets the predefined audio length. Is there any way to set the predefined length using MediaRecorder?
This is a Chrome bug.
Firefox does expose the duration of the recorded media, and if you set the currentTime of the recorded media to more than its actual duration, then the property becomes available in Chrome too...
var recorder,
    chunks = [],
    ctx = new AudioContext(),
    aud = document.getElementById('aud');

function exportAudio() {
    var blob = new Blob(chunks);
    aud.src = URL.createObjectURL(blob);
    aud.onloadedmetadata = function() {
        // it should already be available here
        log.textContent = ' duration: ' + aud.duration;
        // handle chrome's bug
        if (aud.duration === Infinity) {
            // set it to bigger than the actual duration
            aud.currentTime = 1e101;
            aud.ontimeupdate = function() {
                this.ontimeupdate = () => {
                    return;
                }
                log.textContent += ' after workaround: ' + aud.duration;
                aud.currentTime = 0;
            }
        }
    }
}

function getData() {
    var request = new XMLHttpRequest();
    request.open('GET', 'https://upload.wikimedia.org/wikipedia/commons/4/4b/011229beowulf_grendel.ogg', true);
    request.responseType = 'arraybuffer';
    request.onload = decodeAudio;
    request.send();
}

function decodeAudio(evt) {
    var audioData = this.response;
    ctx.decodeAudioData(audioData, startRecording);
}

function startRecording(buffer) {
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    var dest = ctx.createMediaStreamDestination();
    source.connect(dest);
    recorder = new MediaRecorder(dest.stream);
    recorder.ondataavailable = saveChunks;
    recorder.onstop = exportAudio;
    source.start(0);
    recorder.start();
    log.innerHTML = 'recording...'
    // record only 5 seconds
    setTimeout(function() {
        recorder.stop();
    }, 5000);
}

function saveChunks(evt) {
    if (evt.data.size > 0) {
        chunks.push(evt.data);
    }
}

// we need user-activation
document.getElementById('button').onclick = function(evt) {
    getData();
    this.remove();
}

<button id="button">start</button>
<audio id="aud" controls></audio><span id="log"></span>
So the advice here would be to star the bug report so that the Chromium team takes some time to fix it, even if this workaround can do the trick...
Thanks to #Kaiido for identifying the bug and offering the working fix.
I prepared an npm package called get-blob-duration that you can install to get a nice Promise-wrapped function to do the dirty work.
Usage is as follows:
// Returns Promise<Number>
getBlobDuration(blob).then(function(duration) {
    console.log(duration + ' seconds');
});

Or with async/await:

// inside an async function
const duration = await getBlobDuration(blob)
console.log(duration + ' seconds')
A bug in Chrome, detected in 2016 but still open today (March 2019), is the root cause of this behavior. Under certain scenarios audioElement.duration will return Infinity.
Chrome bug information here and here
The following code provides a workaround to avoid the bug.
Usage: create your audioElement and call this function a single time, providing a reference to your audioElement. When the returned promise resolves, the audioElement.duration property should contain the right value. (It also fixes the same problem with videoElements.)
/**
 * calculateMediaDuration()
 * Force media element duration calculation.
 * Returns a promise that resolves when the duration is calculated.
 **/
function calculateMediaDuration(media) {
    return new Promise((resolve, reject) => {
        media.onloadedmetadata = function() {
            // set the mediaElement.currentTime to a high value beyond its real duration
            media.currentTime = Number.MAX_SAFE_INTEGER;
            // listen to time position change
            media.ontimeupdate = function() {
                media.ontimeupdate = function() {};
                // setting the player's currentTime back to 0 can be buggy too; set it to .1 sec first
                media.currentTime = 0.1;
                media.currentTime = 0;
                // media.duration should now have its correct value; return it
                resolve(media.duration);
            }
        }
    });
}

// USAGE EXAMPLE:
calculateMediaDuration(yourAudioElement).then(() => {
    console.log(yourAudioElement.duration)
});
Thanks #colxi for the actual solution. I've added some validation steps, as the solution was working fine but had problems with long audio files.
It took me about 4 hours to get it to work with long audio files; it turns out validation was the fix.
function fixInfinity(media) {
    return new Promise((resolve, reject) => {
        // Wait for media to load metadata
        media.onloadedmetadata = () => {
            // Changing the current time triggers ontimeupdate
            media.currentTime = Number.MAX_SAFE_INTEGER;
            // Check whether the duration is Infinity, NaN or undefined
            if (isInvalidDuration(media)) {
                media.ontimeupdate = () => {
                    // If the duration is now valid, resolve the promise with it
                    if (!isInvalidDuration(media)) {
                        resolve(media.duration);
                    }
                    // The second ontimeupdate is a fallback in case the first one fails
                    media.ontimeupdate = () => {
                        if (!isInvalidDuration(media)) {
                            resolve(media.duration);
                        }
                    };
                };
            } else {
                // If the duration was never Infinity, return it directly
                resolve(media.duration);
            }
        };
    });
}

// Check whether the duration is Infinity, NaN or undefined
// (note: "media.duration === NaN" would always be false; use Number.isNaN instead)
function isInvalidDuration(media) {
    return media.duration === Infinity || Number.isNaN(media.duration) || media.duration === undefined;
}
// USAGE EXAMPLE
// Get the audio player from the HTML
const AudioPlayer = document.getElementById('audio');

const getInfinity = async () => {
    // Await the promise
    await fixInfinity(AudioPlayer).then(val => {
        // Reset the audio's current time
        AudioPlayer.currentTime = 0;
        // Log the duration
        console.log(val)
    })
}
I wrapped the webm-duration-fix package to solve the WebM duration problem; it can be used in Node.js and web browsers, and it supports video files over 2GB without too much memory usage.
Usage is as follows:
import fixWebmDuration from 'webm-duration-fix';
import fs from 'fs';

const mimeType = 'video/webm;codecs=vp9';
let blobSlice: BlobPart[] = [];

mediaRecorder = new MediaRecorder(stream, {
    mimeType
});

mediaRecorder.ondataavailable = (event: BlobEvent) => {
    blobSlice.push(event.data);
}

mediaRecorder.onstop = async () => {
    // fix the blob; supports fixing webm files larger than 2GB
    const fixBlob = await fixWebmDuration(new Blob([...blobSlice], { type: mimeType }));
    // to write locally, it is recommended to use fs.createWriteStream to reduce memory usage
    const fileWriteStream = fs.createWriteStream(inputPath);
    const blobReadstream = fixBlob.stream();
    const blobReader = blobReadstream.getReader();
    while (true) {
        let { done, value } = await blobReader.read();
        if (done) {
            console.log('write done.');
            fileWriteStream.close();
            break;
        }
        fileWriteStream.write(value);
        value = null;
    }
    // blobSlice is declared with let above so it can be reset here
    blobSlice = [];
};
If you want to fix the video file itself, rather than patching the duration at display time on the video tag, you can use the webmFixDuration package; with this method the complete video file is modified.
webmFixDuration GitHub example:
mediaRecorder.onstop = async () => {
    const duration = Date.now() - startTime;
    const buggyBlob = new Blob(mediaParts, { type: 'video/webm' });
    const fixedBlob = await webmFixDuration(buggyBlob, duration);
    displayResult(fixedBlob);
};
I am using ProseMirror to build a collaborative editor where multiple people can edit one document. I wrote the following code, based on the example given here - http://prosemirror.net/docs/guides/collab/
Here is the code:
const { EditorState } = require('prosemirror-state');
const { EditorView } = require('prosemirror-view');
const { DOMParser } = require("prosemirror-model");
const { schema } = require("./schema");
var collab = require("prosemirror-collab");

function Authority(doc) {
    this.doc = doc
    this.steps = []
    this.stepClientIDs = []
    this.onNewSteps = []
}

Authority.prototype.receiveSteps = function(version, steps, clientID) {
    if (version != this.steps.length) return
    var self = this
    // Apply and accumulate new steps
    steps.forEach(function(step) {
        self.doc = step.apply(self.doc).doc
        self.steps.push(step)
        self.stepClientIDs.push(clientID)
    })
    // Signal listeners
    this.onNewSteps.forEach(function(f) { f() })
}

Authority.prototype.stepsSince = function(version) {
    return {
        steps: this.steps.slice(version),
        clientIDs: this.stepClientIDs.slice(version)
    }
}

var auth = new Authority('');
collabEditor(auth)

function collabEditor(authority) {
    var view = new EditorView(document.querySelector("#editor"), {
        state: EditorState.create({ schema: schema, plugins: [collab.collab()] }),
        dispatchTransaction: function(transaction) {
            var newState = view.state.apply(transaction)
            view.updateState(newState)
            var sendable = collab.sendableSteps(newState)
            if (sendable)
                authority.receiveSteps(sendable.version, sendable.steps, sendable.clientID)
        }
    })

    authority.onNewSteps.push(function() {
        var newData = authority.stepsSince(collab.getVersion(view.state))
        view.dispatch(
            collab.receiveTransaction(view.state, newData.steps, newData.clientIDs))
    })

    return view
}
When I run this code (after installing all the dependencies and setting up a simple server in Node.js), I am basically able to edit a text box, but I am not able to open two tabs in Chrome and see the collaboration happen. What am I doing wrong?
I would love some feedback.
This is the example code for a simple, single-page, no-external-communication setup. As such, no, it won't communicate with other tabs. For that, you'd have to move the authority somewhere else and set up the pages to actually communicate with it over HTTP or WebSockets. (See for example this demo.)
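To make that concrete, here is a minimal sketch of the client side of such a setup over WebSockets. The endpoint URL, message shapes, and server behavior are assumptions for illustration; only the prosemirror-collab calls and the Step (de)serialization follow the library's documented API.
const { Step } = require("prosemirror-transform");

// assumed central authority endpoint
const socket = new WebSocket("ws://localhost:8080/collab");

function collabEditorOverSocket() {
    var view = new EditorView(document.querySelector("#editor"), {
        state: EditorState.create({ schema: schema, plugins: [collab.collab()] }),
        dispatchTransaction: function(transaction) {
            var newState = view.state.apply(transaction)
            view.updateState(newState)
            var sendable = collab.sendableSteps(newState)
            if (sendable) {
                // Step instances serialize with toJSON() for transport
                socket.send(JSON.stringify({
                    version: sendable.version,
                    steps: sendable.steps.map(function(s) { return s.toJSON() }),
                    clientID: sendable.clientID
                }))
            }
        }
    })

    // assumed server broadcast: { steps: [...], clientIDs: [...] } since our version
    socket.onmessage = function(msg) {
        var data = JSON.parse(msg.data)
        var steps = data.steps.map(function(s) { return Step.fromJSON(schema, s) })
        view.dispatch(collab.receiveTransaction(view.state, steps, data.clientIDs))
    }

    return view
}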