I'm building a wheel of fortune in html+js that spins rather quickly. Every time a new color flies by the mark, the wheel should play a click-sound. At top speed this sounds almost like a machine gun, so a new file starts playing before the old one is finished basically. The file itself is always the same: click.wav
It works fine in Chrome, but only in Chrome. Firefox has a weird bug where it only plays the sound if there is another active audio source, such as a YouTube video playing in a different tab. Edge and Safari seem to save up the clicks until the end and then play them all simultaneously. It's a mess...
I use the method described here, which works by cloning an <audio> tag.
I guess this is where the problem is:
var sound = new Audio("sounds/click.wav");
sound.preload = 'auto';
sound.load();

function playsound() {
    var click = sound.cloneNode();
    click.volume = 1;
    click.play();
}
Here is a simplified version of my spinning function that just calls the playsound() function several times per second:
function rotateWheel() {
    angle = angle + acceleration;
    while (angle >= 360) {
        angle = angle - 360;
    }
    var wheel = document.getElementById("wheel");
    wheel.style.transform = "rotate(" + angle + "deg)";
    // play the click when a new segment rotates by
    if (Math.floor(angle / 21) != previousSegment) {
        playsound();
        previousSegment = Math.floor(angle / 21);
    }
}
You used an answer from here; this method will at some point crash the browser process, because you either create a memory issue or fill up the DOM with elements the browser has to handle - so you should rethink your approach. And, as you found out, it will not work under heavy use in most browsers, such as Safari or Firefox.
Looking deeper into the <audio> tag specification, it becomes clear that there are many things that simply can't be done with it, which isn't surprising, since it was designed for media playback.
One of those limitations: no fine-grained timing of sound.
So you have to find another method for what you want. We use the Web Audio API, which was designed for online video games.
Web Audio API
An AudioContext is for managing and playing all sounds. To produce a sound using the Web Audio API, create one or more sound sources and connect them to the sound destination provided by the AudioContext instance (usually the speaker).
The AudioBuffer
With the Web Audio API, audio files can be played only after they’ve been loaded into a buffer. Loading sounds takes time, so assets that are used in the animation/game should be loaded on page load, at the start of the game or level, or incrementally while the player is playing.
The basic steps
We use an XMLHttpRequest to load the data from an audio file into a buffer.
Next, we set up an asynchronous callback and send the actual request to load the file.
Once a sound has been buffered and decoded, it can be triggered instantly.
Each time it is triggered, a different instance of the buffered sound is created.
A key feature of sound effects in games is that there can be many of them simultaneously.
So to take your example of the "machine gun": imagine you're in the middle of a gunfight, firing a machine gun.
The machine gun fires many times per second, causing tens of sound effects to play at the same time. This is where the Web Audio API really shines.
A simple example for your application:
/* global AudioContext:true,
*/
var clickingBuffer = null;
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
function loadClickSound(url) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    // Decode asynchronously once the file has loaded
    request.onload = function() {
        context.decodeAudioData(request.response, function(buffer) {
            if (!buffer) {
                console.log('Error decoding file data: ' + url);
                return;
            }
            clickingBuffer = buffer;
        });
    };
    request.onerror = function() {
        console.log('BufferLoader: XHR error');
    };
    request.send();
}
function playSound(buffer, time, volume) {
    var source = context.createBufferSource(); // creates a sound source
    source.buffer = buffer;                    // tell the source which sound to play
    var gainNode = context.createGain();       // create a gain node to control the volume
    source.connect(gainNode);                  // connect the source to the gain node
    gainNode.connect(context.destination);     // connect the gain node to the destination (the speakers)
    gainNode.gain.value = volume;              // set the volume
    source.start(time);                        // play the source at the desired time; 0 = now
}
// Call this once on page load (e.g. in your document-ready handler)
loadClickSound('sounds/click.wav');
//and this plays the sound
playSound(clickingBuffer, 0, 1);
Now you can play around with different timings and volume variations, for example by introducing a random factor.
If you need a more complex solution with different clicking sounds (stored in a buffer array) and volume/distance variations, this would be a longer piece of code.
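As a rough sketch of that random factor (reusing the playSound(), clickingBuffer and context from the example above; the randomInRange() helper is my own, not part of any API):

```javascript
// Hypothetical helper: a random number in the range [min, max)
function randomInRange(min, max) {
    return min + Math.random() * (max - min);
}

// Play a click with slightly randomized volume and timing.
// Assumes playSound(), clickingBuffer and context from the example above.
function playClickWithVariation() {
    var volume = randomInRange(0.8, 1.0); // volume somewhere between 0.8 and 1.0
    var delay = randomInRange(0, 0.01);   // up to 10 ms of random delay
    playSound(clickingBuffer, context.currentTime + delay, volume);
}
```

Small variations like this keep the rapid-fire clicks from sounding artificially identical.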
Related
I'm developing a game using JavaScript and other web technologies. In it, there's a game mode that is basically a tower defense, in which multiple objects may need to use the same audio file (.ogg) at the same time. Loading the file and creating a new audio object for each of them lags the game too much, even if I attempt to stream it instead of doing a simple sync read. And if I create a single audio object in a variable and reuse it, then each time it is playing and a new request arrives to play that audio, the one that was playing stops so the new one can play (so, with enough of those, nothing plays at all).
With those issues, I decided to make copies of the audio object each time it was going to be played, but that is not only slow, it also creates a minor memory leak (at least the way I did it).
How can I properly cache audio for re-use? Consider that I'm pretty sure I'll need a new one each time, because each sound has a position, and thus each of them will play differently, based on the player's position relative to the object that is playing the sound.
You tagged your question with web-audio-api, but from the body of this question, it seems you are using an HTMLMediaElement <audio> instead of the Web Audio API.
So I'll invite you to do the transition to that Web Audio API.
From there you'll be able to decode your audio file once, keep the decoded data in memory only once as an AudioBuffer, and create many readers that all hook into that one and only AudioBuffer, without eating any more memory.
const btn = document.querySelector("button");
const context = new AudioContext();
// a GainNode to control the output volume of our audio
const volumeNode = context.createGain();
volumeNode.gain.value = 0.5; // from 0 to 1
volumeNode.connect(context.destination);

fetch("https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3")
    // get the resource as an ArrayBuffer
    .then((resp) => resp.arrayBuffer())
    // decode the Audio data from this resource
    .then((buffer) => context.decodeAudioData(buffer))
    // now we have our AudioBuffer object, ready to be played
    .then((audioBuffer) => {
        btn.onclick = (evt) => {
            // allowing an AudioContext to make noise
            // must be requested from a user gesture
            if (context.state === "suspended") {
                context.resume();
            }
            // a very light player object
            const source = context.createBufferSource();
            // a simple pointer to the big AudioBuffer (no copy)
            source.buffer = audioBuffer;
            // connect to our volume node, itself connected to the audio output
            source.connect(volumeNode);
            // start playing now
            source.start(0);
        };
        // now you can spam the button!
        btn.disabled = false;
    })
    .catch(console.error);
<button disabled>play</button>
I'm analysing an audio file in order to use the channelData to drive another part of my webapp (basically draw graphics based on the audio file). The callback function for the playback looks something like this:
successCallback(mediaStream) {
    var audioContext = new (window.AudioContext ||
        window.webkitAudioContext)();
    source = audioContext.createMediaStreamSource(mediaStream);
    node = audioContext.createScriptProcessor(256, 1, 1);
    node.onaudioprocess = function(data) {
        var monoChannel = data.inputBuffer.getChannelData(0);
        // ...
    };
Somehow I thought that if I ran the above code with the same file, it would yield the same results every time. But that's not the case: the same audio file triggers the onaudioprocess function sometimes 70, sometimes 72 times, for instance, yielding different data each time.
Is there a way to get consistent data of that sort in the browser?
EDIT: I'm getting the audio from a recording function on the same page. When the recording is finished the resulting file gets set as the src of an <audio> element. recorder is my MediaRecorder.
recorder.addEventListener("dataavailable", function(e) {
    fileurl = URL.createObjectURL(e.data);
    document.querySelector("#localaudio").src = fileurl;
    // ...
To answer your original question: getChannelData is deterministic, i.e. it will yield the same Float32Array from the same AudioBuffer for the same channel (unless you happen to transfer the backing ArrayBuffer to another thread, in which case it will return an empty Float32Array with a detached backing buffer from then on).
I presume the problem you are encountering here is a threading issue (my guess is that the MediaStream is already playing before you start processing the audio stream from it), but it's hard to tell exactly without debugging your complete app (there are at least 3 threads at work here: an audio processing thread for the MediaStream, an audio processing thread for the AudioContext you are using, and the main thread that runs your code).
Is there a way to get consistent data of that sort in the browser?
Yes.
Instead of processing through a real-time audio stream for real-time analysis, you could just take the recording result (e.data), read it as an ArrayBuffer, and then decode it as an AudioBuffer, something like:
recorder.addEventListener("dataavailable", function (e) {
    let reader = new FileReader();
    reader.onload = function (e) {
        audioContext.decodeAudioData(e.target.result).then(function (audioBuffer) {
            var monoChannel = audioBuffer.getChannelData(0);
            // monoChannel contains the entire first channel of your recording as a Float32Array
            // ...
        });
    };
    reader.readAsArrayBuffer(e.data);
});
Note: this code would become a lot simpler with async functions and Promises, but it should give a general idea of how to read the entire completed recording.
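For illustration, a sketch of what that simpler async version might look like (decodeRecording() is my own name for the helper, and the usage assumes the same recorder and audioContext as above):

```javascript
// Hypothetical helper: decode a recorded Blob into the first channel's samples.
async function decodeRecording(blob, audioContext) {
    const arrayBuffer = await blob.arrayBuffer();                  // read the Blob's bytes
    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
    return audioBuffer.getChannelData(0);                          // Float32Array of channel 0
}

// Usage, assuming the same `recorder` and `audioContext` as above:
// recorder.addEventListener("dataavailable", async (e) => {
//     const monoChannel = await decodeRecording(e.data, audioContext);
//     // ...
// });
```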
Also note: the ScriptProcessorNode is deprecated due to performance issues inherent in cross-thread data copy, especially involving the JS main thread. The preferred alternative is the much more advanced AudioWorklet, but this is a fairly new way to do things on the web and requires a solid understanding of worklets in general.
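As a minimal, hedged sketch of that AudioWorklet approach (the "channel-tap" processor name and the attachTap() helper are mine, not from any library; the processor module is built from a Blob URL so everything stays in one file):

```javascript
// Code for the processor, which runs on the dedicated audio thread.
const processorCode = `
    class ChannelTap extends AudioWorkletProcessor {
        process(inputs) {
            const channel = inputs[0][0]; // first channel of the first input (128 frames)
            if (channel) this.port.postMessage(channel.slice(0)); // copy to main thread
            return true; // keep the processor alive
        }
    }
    registerProcessor("channel-tap", ChannelTap);
`;

// On the main thread: load the module, create the node, forward chunks to a callback.
async function attachTap(audioContext, sourceNode, onChunk) {
    const url = URL.createObjectURL(
        new Blob([processorCode], { type: "application/javascript" })
    );
    await audioContext.audioWorklet.addModule(url);
    const node = new AudioWorkletNode(audioContext, "channel-tap");
    node.port.onmessage = (e) => onChunk(e.data); // e.data is a Float32Array
    sourceNode.connect(node);
    return node;
}
```

Note that, like the ScriptProcessorNode, this taps a live stream, so it does not solve the determinism issue by itself; decoding the finished recording as shown above remains the reliable route.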
I have been working on streaming live video using WebRTC (RTCPeerConnection) with a library called simple-peer, but I am seeing a lag between the live video stream (captured with MediaRecorder) and its playback (via MediaSource).
Here is recorder:
var mediaRecorder = new MediaRecorder(stream, options);
mediaRecorder.ondataavailable = handleDataAvailable;

function handleDataAvailable(event) {
    if (connected && event.data.size > 0) {
        peer.send(event.data);
    }
}
...
peer.on('connect', () => {
    // wait for 'connect' event before using the data channel
    mediaRecorder.start(1);
});
Here is source that is played:
var mediaSource = new MediaSource();
var sourceBuffer;

mediaSource.addEventListener('sourceopen', args => {
    sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
});
...
peer.on('data', data => {
    // got a data channel message
    sourceBuffer.appendBuffer(data);
});
I open two tabs, connect to myself, and see a delay in the played video...
It seems I have misconfigured MediaRecorder or MediaSource.
Any help will be appreciated ;)
You've combined two completely unrelated techniques for streaming the video, and are getting the worst tradeoffs of both. :-)
WebRTC has media stream handling built into it. If you expect realtime video, the WebRTC stack is what you want to use. It handles codec negotiation, auto-scales bandwidth, frame size, frame rate, and encoding parameters to match network conditions, and will outright drop chunks of time to keep playback as realtime as possible.
On the other hand, if retaining quality is more desirable than being realtime, MediaRecorder is what you would use. It makes no adjustments based on network conditions because it is unaware of those conditions. MediaRecorder doesn't know or care where you put the data after it gives you the buffers.
If you try to play back video as it's being recorded, it will inevitably lag further and further behind, because there is no built-in catch-up method. The only thing that can happen is a buffer underrun, where the playback side waits until there is enough data to begin playback again. Even if it falls minutes behind, it isn't going to automatically skip ahead.
The solution is to use the right tool. It sounds like from your question that you want realtime video. Therefore, you need to use WebRTC. Fortunately simple-peer makes this... simple.
On the recording side:
const peer = new Peer({
    initiator: true,
    stream
});
Then on the playback side:
peer.on('stream', (stream) => {
    videoEl.srcObject = stream;
});
Much simpler. The WebRTC stack handles everything for you.
I have an array of Blobs (binary data, really -- I can express it however is most efficient. I'm using Blobs for now but maybe a Uint8Array or something would be better). Each Blob contains 1 second of audio/video data. Every second a new Blob is generated and appended to my array. So the code roughly looks like so:
var arrayOfBlobs = [];
setInterval(function() {
    arrayOfBlobs.push(nextChunk());
}, 1000);
My goal is to stream this audio/video data to an HTML5 element. I know that a Blob URL can be generated and played like so:
var src = URL.createObjectURL(arrayOfBlobs[0]);
var video = document.getElementsByTagName("video")[0];
video.src = src;
Of course this only plays the first 1 second of video. I also assume I can trivially concatenate all of the Blobs currently in my array somehow to play more than one second:
// Something like this (untested)
var concatenatedBlob = new Blob(arrayOfBlobs);
var src = ...
However this will still eventually run out of data. As Blobs are immutable, I don't know how to keep appending data as it's received.
I'm certain this should be possible because YouTube and many other video streaming services utilize Blob URLs for video playback. How do they do it?
Solution
After some significant Googling I managed to find the missing piece to the puzzle: MediaSource
Effectively the process goes like this:
Create a MediaSource
Create an object URL from the MediaSource
Set the video's src to the object URL
On the sourceopen event, create a SourceBuffer
Use SourceBuffer.appendBuffer() to add all of your chunks to the video
This way you can keep adding new bits of video without changing the object URL.
Caveats
The SourceBuffer object is very picky about codecs. These have to be declared, and must be exact, or it won't work
You can only append one blob of video data to the SourceBuffer at a time, and you can't append a second blob until the first one has finished (asynchronously) processing
If you append too much data to the SourceBuffer without calling .remove() then you'll eventually run out of RAM and the video will stop playing. I hit this limit around 1 hour on my laptop
Example Code
Depending on your setup, some of this may be unnecessary (particularly the part where we build a queue of video data before we have a SourceBuffer then slowly append our queue using updateend). If you are able to wait until the SourceBuffer has been created to start grabbing video data, your code will look much nicer.
<html>
<head>
</head>
<body>
    <video id="video"></video>
    <script>
        // As before, I'm regularly grabbing blobs of video data
        // The implementation of "nextChunk" could be various things:
        //   - reading from a MediaRecorder
        //   - reading from an XMLHttpRequest
        //   - reading from a local webcam
        //   - generating the files on the fly in JavaScript
        //   - etc
        var arrayOfBlobs = [];

        setInterval(function() {
            arrayOfBlobs.push(nextChunk());
            // NEW: Try to flush our queue of video data to the video element
            appendToSourceBuffer();
        }, 1000);

        // 1. Create a `MediaSource`
        var mediaSource = new MediaSource();

        // 2. Create an object URL from the `MediaSource`
        var url = URL.createObjectURL(mediaSource);

        // 3. Set the video's `src` to the object URL
        var video = document.getElementById("video");
        video.src = url;

        // 4. On the `sourceopen` event, create a `SourceBuffer`
        var sourceBuffer = null;
        mediaSource.addEventListener("sourceopen", function() {
            // NOTE: Browsers are VERY picky about the codec being EXACTLY
            // right here. Make sure you know which codecs you're using!
            sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");

            // If we requested any video data prior to setting up the SourceBuffer,
            // we want to make sure we only append one blob at a time
            sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
        });

        // 5. Use `SourceBuffer.appendBuffer()` to add all of your chunks to the video
        function appendToSourceBuffer() {
            if (
                mediaSource.readyState === "open" &&
                sourceBuffer &&
                sourceBuffer.updating === false &&
                arrayOfBlobs.length
            ) {
                sourceBuffer.appendBuffer(arrayOfBlobs.shift());
            }

            // Limit the total buffer size to 20 minutes
            // This way we don't run out of RAM
            if (
                video.buffered.length &&
                video.buffered.end(0) - video.buffered.start(0) > 1200
            ) {
                sourceBuffer.remove(0, video.buffered.end(0) - 1200);
            }
        }
    </script>
</body>
</html>
As an added bonus this automatically gives you DVR functionality for live streams, because you're retaining 20 minutes of video data in your buffer (you can seek by simply using video.currentTime = ...)
Adding to the previous answer...
make sure to set sourceBuffer.mode = 'sequence' in the MediaSource sourceopen event handler, to ensure the data is appended in the order it is received. The default value is 'segments', which buffers until the next 'expected' timeframe is loaded.
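A minimal sketch of where that line goes (the setupSequenceBuffer() wrapper is mine; mimeCodec is whatever exact codec string your stream uses):

```javascript
// Create the SourceBuffer inside the 'sourceopen' handler and put it
// in 'sequence' mode so chunks play in the order they arrive.
function setupSequenceBuffer(mediaSource, mimeCodec, onReady) {
    mediaSource.addEventListener("sourceopen", function () {
        var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
        sourceBuffer.mode = "sequence"; // append in arrival order, not by timestamp
        onReady(sourceBuffer);          // hand the buffer back to the caller
    });
}
```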
Additionally, make sure that you are not sending any packets with data.size === 0, and make sure that chunks don't pile up, by clearing the queue on the broadcasting side - unless you want to record it as an entire video, in which case just make sure the broadcast video is small enough and your internet connection is fast. The smaller and lower the resolution, the more likely you can keep a realtime connection with a client, e.g. in a video call.
For iOS, the broadcast needs to be made from an iOS/macOS application and be in mp4 format. The video chunk gets saved to the app's cache and is then removed once it has been sent to the server. A client can connect to the stream using either a web browser or an app on nearly any device.
I have developed a simple custom chromecast receiver for a game.
In it I play short sounds from the receiver javascript using:
this.bounceSound = new Audio("paddle.ogg");
to create the audio object when the game is loaded, and then using:
this.bounceSound.play();
to play the sound when needed in the game.
This works fine in chrome on my laptop, but when running the receiver in my chromecast some sounds don't play, others are delayed.
Could this be a problem with my choice of sound format (.ogg) for the audio files?
If not, what else could the problem be?
Are there any best practices for the details of the sound files (sample rate, bit depth, etc.)?
Thanks
Just for the record to avoid future confusion of other developers trying to load and play back multiple short sounds at the same time:
On Chromecast, the HTML video and audio tags can only support a
single active media element at a time.
(Source: https://plus.google.com/+LeonNicholls/posts/3Fq5jcbxipJ - make sure to read the rest, it contains also important information about limitations)
Only one audio element will be loaded; the others get error code 4 (that was at least the case during my debugging sessions). The correct way of loading and playing back several short sounds is to use the Web Audio API, as explained by Leon Nicholls in the Google+ post I linked to above.
Simple Web Audio API Wrapper
I whipped up a crude replacement for the HTMLAudioElement in JavaScript that is based on the Web Audio API:
function WebAudio(src) {
    if (src) this.load(src);
}

WebAudio.prototype.audioContext = new AudioContext();

WebAudio.prototype.load = function(src) {
    if (src) this.src = src;
    console.log('Loading audio ' + this.src);
    var self = this;
    var request = new XMLHttpRequest();
    request.open("GET", this.src, true);
    request.responseType = "arraybuffer";
    request.onload = function() {
        self.audioContext.decodeAudioData(request.response, function(buffer) {
            if (!buffer) {
                if (self.onerror) self.onerror();
                return;
            }
            self.buffer = buffer;
            if (self.onload)
                self.onload(self);
        }, function(error) {
            self.onerror(error);
        });
    };
    request.send();
};

WebAudio.prototype.play = function() {
    var source = this.audioContext.createBufferSource();
    source.buffer = this.buffer;
    source.connect(this.audioContext.destination);
    source.start(0);
};
It can be used as follows:
var audio1 = new WebAudio('sounds/sound1.ogg');
audio1.onload = function() {
    audio1.play();
};

var audio2 = new WebAudio('sounds/sound2.ogg');
audio2.onload = function() {
    audio2.play();
};
You should download the sounds up front before you start the game. Also be aware that these sounds will then be stored in memory and Chromecast has very limited memory for that. Make sure these sounds are small and will all fit into memory.
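A sketch of such up-front loading, built on the WebAudio wrapper above (the preloadAll() helper is my own name, not part of any API):

```javascript
// Load every sound before the game starts; call `done` once all are decoded.
// Assumes the WebAudio wrapper defined above.
function preloadAll(urls, done) {
    var sounds = {};
    var remaining = urls.length;
    urls.forEach(function (url) {
        var audio = new WebAudio(url);
        audio.onload = function () {
            sounds[url] = audio;          // keep the decoded sound by its URL
            remaining = remaining - 1;
            if (remaining === 0) done(sounds); // everything is in memory now
        };
    });
}

// Usage:
// preloadAll(['sounds/sound1.ogg', 'sounds/sound2.ogg'], function (sounds) {
//     // safe to start the game; sounds['sounds/sound1.ogg'].play() etc.
// });
```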