Control over buffer of mp3 stream - javascript

Currently I have some stream URLs (like this example) which I'm playing normally via new Audio(), like the code below:
const audio = new Audio();
audio.src = 'http://radio.talksport.com/stream?awparams=platform:ts-tunein;lang:en&aw_0_1st.playerid=RadioTime&aw_0_1st.skey=1572173094&aw_0_1st.platform=tunein';
audio.load();
audio.play();
But I'm struggling to add a custom buffer to the given stream, e.g. a buffer of 2 minutes. Is this possible with streaming?

This is a live stream, so no, you cannot set a buffer of 2 minutes.
(Unless of course you want to have the user wait 2 minutes before playing anything back.)
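If you really do want that trade-off, a crude sketch would be to start loading and postpone the call to play(). Note this relies on the browser continuing to buffer the paused live stream, which is not guaranteed and is typically capped; streamUrl is a placeholder for the URL from the question:
const audio = new Audio();
audio.src = streamUrl; // the live stream URL from the question
audio.load(); // begin fetching immediately

// Crude, assumption-laden delay: wait ~2 minutes before starting playback.
// Browsers decide how much of a live stream they retain, so playback may
// still begin near the live edge instead of 2 minutes behind.
setTimeout(() => audio.play(), 2 * 60 * 1000);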

Related

I have audio at 128 kbps that is read at 16 KB every second, but it seems that I am getting more than 1 second of the music each time. Why?

So I have a new stream to which I push 2048 bytes of audio every 128 ms (i.e. 16 KB every second), but I seem to have more than 1 second of data pushed to the stream. (I think so because ffmpeg is still streaming sound even tens of seconds after I stop pushing data into it.)
When I changed it to 1024 bytes per 128 ms (8 KB/s), the sound stops right after I stop pushing data.
Please correct me if I am doing anything wrong!
Some background story
I am using ffmpeg to stream to an RTP server. It is used one-shot, so I can't stop ffmpeg and start it again. I don't want to use the ZeroMQ approach because of latency. The goal I am trying to achieve is to keep the same readable stream piped to ffmpeg and change the audio content on the fly, by stopping the pushes from the previous audio file and switching to the new one.
If you know some other way to achieve this goal, I would be very pleased to know. Thank you in advance!
My current code
const stream = new Stream.Readable({ read() {} }); // no-op read(): we push() manually
// ...
// (send stream to ffmpeg)
// ...
fileP = fs.createReadStream(input);
fileP.on('readable', async () => {
    let chunk;
    // 16 * 128 = 2048 bytes per iteration, one push every 125 ms
    while (previousStream && (chunk = fileP?.read(16 * 128))) {
        stream.push(chunk);
        await delay(125); // delay(ms): a Promise-based sleep helper
    }
});
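As a point of reference, here is a minimal sketch of the rate arithmetic and a timer-paced push; the names (pace, sink) are assumptions, not the asker's code:
const { Readable } = require('stream');

// 128 kbps = 128,000 bits/s = 16,000 bytes/s of encoded MP3.
// 2048 bytes every 128 ms is exactly that real-time rate, but ffmpeg and
// the OS pipe keep their own buffers, so more than one second of audio
// can be "in flight" downstream at any moment.
const BYTES_PER_SECOND = 16000;
const CHUNK = 2048;
const TICK_MS = (CHUNK / BYTES_PER_SECOND) * 1000; // 128 ms

function pace(source, sink) {
    const timer = setInterval(() => {
        const chunk = source.read(CHUNK);
        if (chunk) sink.push(chunk); // sink: a Readable we push into manually
    }, TICK_MS);
    source.on('end', () => clearInterval(timer));
    return timer;
}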

Better timing for audio file

So I created the audio:
const jumpy = new Audio();
jumpy.src = "./audio/jump2.wav";
and added an event listener that triggers the audio:
const cvs = document.getElementById("ghost");
cvs.addEventListener("click", function(evt){
    jumpy.play();
});
The problem is that the browser first waits for the audio to finish playing in full (about 1000 ms) before it will play it again, but I want the audio to restart every time I click.
How can I do that?
For short sounds that you want to use multiple times like this, it is better to use the AudioBufferSourceNode in the Web Audio API.
For example:
const audioContext = new AudioContext();
const buffer = await audioContext.decodeAudioData(/* audio data */);
const bufferSourceNode = audioContext.createBufferSource();
bufferSourceNode.buffer = buffer;
bufferSourceNode.connect(audioContext.destination);
bufferSourceNode.start();
The buffer will be kept in memory, already decoded to PCM and ready to play. Then when you call .start(), it will play right away.
See also: https://stackoverflow.com/a/62355960/362536
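For completeness, a rough sketch of wiring this into the click handler from the question (it reuses the ./audio/jump2.wav path and the #ghost element from the question; assume it runs in an async context):
const audioContext = new AudioContext();
const response = await fetch("./audio/jump2.wav"); // path from the question
const jumpBuffer = await audioContext.decodeAudioData(await response.arrayBuffer());

document.getElementById("ghost").addEventListener("click", () => {
    // buffer source nodes are one-shot and cheap: create one per click
    const source = audioContext.createBufferSource();
    source.buffer = jumpBuffer; // shared decoded buffer, no copy
    source.connect(audioContext.destination);
    source.start(); // starts from the beginning; overlapping plays are fine
});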

How to cache a webaudio object properly?

I'm developing a game using JavaScript and other web technologies. In it, there's a game mode that is basically a tower defense, in which multiple objects may need to use the same audio file (.ogg) at the same time. Loading the file and creating a new web audio object for each one of them lags too much, even if I attempt to stream it instead of doing a simple sync read. And if I create and save one web audio object in a variable to use multiple times, then each time it is playing and there is a new request to play said audio, the one that was playing stops to allow the new one to play (so, with enough of those, nothing plays at all).
Given those issues, I decided to make copies of said web audio object each time it was going to be played, but that is not only slow, it also creates a minor memory leak (at least the way I did it).
How can I properly cache a web audio object for re-use? Consider that I'm pretty sure I'll need a new one each time, because each sound has a position, and thus each of them will play differently, based on the player's position relative to the object that is playing the audio.
You tagged your question with web-audio-api, but from the body of this question, it seems you are using an HTMLMediaElement (<audio>) instead of the Web Audio API.
So I'll invite you to make the transition to the Web Audio API.
From there you'll be able to decode your audio file once, keep the decoded data in memory only once as an AudioBuffer, and create many lightweight readers that all hook into that one and only AudioBuffer, without eating any more memory.
const btn = document.querySelector("button");
const context = new AudioContext();
// a GainNode to control the output volume of our audio
const volumeNode = context.createGain();
volumeNode.gain.value = 0.5; // from 0 to 1
volumeNode.connect(context.destination);
fetch("https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3")
    // get the resource as an ArrayBuffer
    .then((resp) => resp.arrayBuffer())
    // decode the audio data from this resource
    .then((buffer) => context.decodeAudioData(buffer))
    // now we have our AudioBuffer object, ready to be played
    .then((audioBuffer) => {
        btn.onclick = (evt) => {
            // allowing an AudioContext to make noise
            // must be triggered by a user gesture
            if (context.state === "suspended") {
                context.resume();
            }
            // a very light player object
            const source = context.createBufferSource();
            // a simple pointer to the big AudioBuffer (no copy)
            source.buffer = audioBuffer;
            // connect to our volume node, itself connected to the audio output
            source.connect(volumeNode);
            // start playing now
            source.start(0);
        };
        // now you can spam the button!
        btn.disabled = false;
    })
    .catch(console.error);
<button disabled>play</button>
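As for the positional requirement in the question: since every playback already gets its own lightweight source node, a PannerNode can be inserted per play in front of the shared buffer. A rough sketch, reusing context, audioBuffer and volumeNode from the snippet above (the coordinates and function name are assumptions):
function playAt(x, y, z) {
    const source = context.createBufferSource();
    source.buffer = audioBuffer; // still the one shared decoded buffer
    const panner = context.createPanner();
    panner.positionX.value = x;
    panner.positionY.value = y;
    panner.positionZ.value = z;
    source.connect(panner).connect(volumeNode);
    source.start(0);
}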

HTML5 Video: Streaming Video with Blob URLs

I have an array of Blobs (binary data, really -- I can express it however it's most efficient; I'm using Blobs for now, but maybe a Uint8Array or something would be better). Each Blob contains 1 second of audio/video data. Every second a new Blob is generated and appended to my array. So the code roughly looks like this:
var arrayOfBlobs = [];
setInterval(function() {
    arrayOfBlobs.push(nextChunk()); // JavaScript arrays use push(), not append()
}, 1000);
My goal is to stream this audio/video data to an HTML5 <video> element. I know that a Blob URL can be generated and played like so:
var src = URL.createObjectURL(arrayOfBlobs[0]);
var video = document.getElementsByTagName("video")[0];
video.src = src;
Of course this only plays the first 1 second of video. I also assume I can trivially concatenate all of the Blobs currently in my array somehow to play more than one second:
// Something like this (untested)
var concatenatedBlob = new Blob(arrayOfBlobs);
var src = ...
However this will still eventually run out of data. As Blobs are immutable, I don't know how to keep appending data as it's received.
I'm certain this should be possible because YouTube and many other video streaming services utilize Blob URLs for video playback. How do they do it?
Solution
After some significant Googling I managed to find the missing piece to the puzzle: MediaSource
Effectively, the process goes like this:
1. Create a MediaSource
2. Create an object URL from the MediaSource
3. Set the video's src to the object URL
4. On the sourceopen event, create a SourceBuffer
5. Use SourceBuffer.appendBuffer() to add all of your chunks to the video
This way you can keep adding new bits of video without changing the object URL.
Caveats
The SourceBuffer object is very picky about codecs. These have to be declared, and must be exact, or it won't work.
You can only append one blob of video data to the SourceBuffer at a time, and you can't append a second blob until the first one has finished (asynchronously) processing.
If you append too much data to the SourceBuffer without calling .remove(), then you'll eventually run out of RAM and the video will stop playing. I hit this limit at around 1 hour on my laptop.
Example Code
Depending on your setup, some of this may be unnecessary (particularly the part where we build a queue of video data before we have a SourceBuffer then slowly append our queue using updateend). If you are able to wait until the SourceBuffer has been created to start grabbing video data, your code will look much nicer.
<html>
<head>
</head>
<body>
<video id="video"></video>
<script>
// As before, I'm regularly grabbing blobs of video data
// The implementation of "nextChunk" could be various things:
// - reading from a MediaRecorder
// - reading from an XMLHttpRequest
// - reading from a local webcam
// - generating the files on the fly in JavaScript
// - etc
var arrayOfBlobs = [];
setInterval(function() {
    arrayOfBlobs.push(nextChunk());
    // NEW: Try to flush our queue of video data to the video element
    appendToSourceBuffer();
}, 1000);
// 1. Create a `MediaSource`
var mediaSource = new MediaSource();
// 2. Create an object URL from the `MediaSource`
var url = URL.createObjectURL(mediaSource);
// 3. Set the video's `src` to the object URL
var video = document.getElementById("video");
video.src = url;
// 4. On the `sourceopen` event, create a `SourceBuffer`
var sourceBuffer = null;
mediaSource.addEventListener("sourceopen", function()
{
    // NOTE: Browsers are VERY picky about the codec being EXACTLY
    // right here. Make sure you know which codecs you're using!
    sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");
    // If we requested any video data prior to setting up the SourceBuffer,
    // we want to make sure we only append one blob at a time
    sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});
// 5. Use `SourceBuffer.appendBuffer()` to add all of your chunks to the video
function appendToSourceBuffer()
{
    if (
        mediaSource.readyState === "open" &&
        sourceBuffer &&
        sourceBuffer.updating === false &&
        arrayOfBlobs.length > 0 // don't call appendBuffer(undefined) on an empty queue
    )
    {
        // appendBuffer() expects an ArrayBuffer or TypedArray, so the chunks
        // in arrayOfBlobs must be in one of those forms (not raw Blobs)
        sourceBuffer.appendBuffer(arrayOfBlobs.shift());
    }
    // Limit the total buffer size to 20 minutes
    // This way we don't run out of RAM
    if (
        video.buffered.length &&
        video.buffered.end(0) - video.buffered.start(0) > 1200
    )
    {
        sourceBuffer.remove(0, video.buffered.end(0) - 1200);
    }
}
</script>
</body>
</html>
As an added bonus, this automatically gives you DVR functionality for live streams, because you're retaining 20 minutes of video data in your buffer (you can seek by simply setting video.currentTime = ...).
Adding to the previous answer...
Make sure to set sourceBuffer.mode = 'sequence' in the MediaSource sourceopen event handler to ensure the data is appended in the order it is received. The default value is 'segments', which buffers until the next 'expected' timeframe is loaded.
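For instance, a small sketch based on the handler from the answer above:
mediaSource.addEventListener("sourceopen", function() {
    sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");
    // append chunks strictly in arrival order, ignoring their timestamps
    sourceBuffer.mode = "sequence";
    sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});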
Additionally, make sure that you are not sending any packets with data.size === 0, and prevent a backlog ('stack') by clearing the queue on the broadcasting side, unless you want to record the whole broadcast as one video, in which case just make sure the broadcast video is small enough and your internet connection is fast. The smaller the chunks and the lower the resolution, the more likely you can keep a realtime connection with a client, e.g. for a video call.
For iOS, the broadcast needs to be made from an iOS/macOS application and be in mp4 format. Each video chunk gets saved to the app's cache and then removed once it is sent to the server. A client can connect to the stream using either a web browser or an app on nearly any device.

Web Audio API does not play sound continuously

I am trying to buffer MP3 songs using Node.js and Socket.IO in real time. I basically divide the MP3 into segments of bytes and send them over to the client side, where the Web Audio API receives them, decodes them, and starts to play them. The issue here is that the sound does not play continuously; there is something like a 0.5 second gap between every buffered segment. How can I solve this problem?
// buffer is 2 seconds of decoded audio, ready to be played
// the function is called when a new buffer is received
function stream(buffer)
{
    // creates a new buffer source and connects it to the audio context
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.loop = false;
    // sets it and updates the time
    source.start(time + context.currentTime);
    time += buffer.duration; // time is a global variable initially set to zero
}
The part where stream is called
// bufferQ is an array of decoded MP3 data
// the stream function is called after every 3 segments that are received
// the Web Audio API plays them with gaps between the sounds
if (bufferQ.length == 3)
{
    for (var i = 0, n = bufferQ.length; i < n; i++)
    {
        stream(bufferQ.splice(0, 1)[0]);
    }
}
Should I use a different API other than the Web Audio API, or is there a way to schedule my buffers so that they play continuously?
context.currentTime will vary depending on when it is evaluated, and every read has an implicit inaccuracy due to being rounded to the nearest 2ms or so (see Firefox BaseAudioContext/currentTime#Reduced time precision). Consider:
function stream(buffer)
{
    ...
    source.start(time + context.currentTime);
    time += buffer.duration; // time is a global variable initially set to zero
Calling source.start(time + context.currentTime) for every block of PCM data will always start the playback of that block at whatever the currentTime is now (which is not necessarily related to the playback time) rounded to 2ms, plus the time offset.
For playing back-to-back PCM chunks, read currentTime once at the beginning of the stream, then add each duration to it after scheduling the playback. For example, PCMPlayer does:
PCMPlayer.prototype.flush = function() {
    ...
    if (this.startTime < this.audioCtx.currentTime) {
        this.startTime = this.audioCtx.currentTime;
    }
    ...
    bufferSource.start(this.startTime);
    this.startTime += audioBuffer.duration;
};
Note that startTime is only reset when it represents a time in the past; for continuous buffered streaming it is not reset, as it will be a value some time in the future. In each call to flush, startTime is used to schedule playback, and it is only increased by each PCM chunk's duration; it does not depend on currentTime.
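Applied to the stream() function from the question, a minimal sketch of that scheduling (nextTime is an assumed name) looks like this:
var nextTime = 0; // next scheduled start time on the AudioContext clock
function stream(buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    // only snap to "now" if we have drained and fallen behind
    if (nextTime < context.currentTime) {
        nextTime = context.currentTime;
    }
    source.start(nextTime);
    nextTime += buffer.duration; // advance by exact chunk duration, never re-read currentTime
}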
Another potential issue is that the sample rate of the PCM buffer that you are decoding may not match the sample rate of the AudioContext. In this case, the browser resamples each PCM buffer separately, resulting in discontinuities at the boundaries of the chunks. See Clicking sounds in Stream played with Web Audio Api.
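If so, one option (the sampleRate constructor option is supported in current browsers) is to create the context at the incoming PCM rate; 44100 below is an assumed example value:
// match the context to the PCM stream's rate to avoid per-chunk resampling
var context = new AudioContext({ sampleRate: 44100 });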
It's an issue with MP3 files: each MP3 file has a few frames of silence at the start and end (encoder padding).
If you use WAV files, or time the start and stop of each file properly, you can fix it.
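To illustrate the second option, building on the nextTime sketch above, one could skip an assumed amount of encoder padding when scheduling each chunk (the 0.025 s figure is a placeholder; real padding depends on the encoder):
var PADDING = 0.025; // assumed seconds of encoder silence at each edge
source.start(nextTime, PADDING, buffer.duration - 2 * PADDING); // skip head, trim tail
nextTime += buffer.duration - 2 * PADDING;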
