Stream and play audio received via TCP packets in JavaScript/HTML5

I have TCP packets that contain audio data. I am new to JS and HTML5. I wrote the code below, which produces a buzzing sound. I want to know what audio samples look like inside a TCP packet. I captured TCP packets with Wireshark; the data segment shows some characters and their ASCII values. I believe these characters are the audio data. Assuming so, how can I pass them to myArrayBuffer as below?
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source = audioCtx.createBufferSource();

function Set1() {
    try {
        // Create an empty 30-second stereo buffer at the sample rate of the AudioContext
        var myArrayBuffer = audioCtx.createBuffer(2, audioCtx.sampleRate * 30, audioCtx.sampleRate);
        // Fill the buffer with white noise;
        // just random values between -1.0 and 1.0
        for (var channel = 0; channel < myArrayBuffer.numberOfChannels; channel++) {
            // This gives us the actual array that contains the data
            var nowBuffering = myArrayBuffer.getChannelData(channel);
            for (var i = 0; i < myArrayBuffer.length; i++) {
                // Math.random() is in [0; 1.0]
                // audio needs to be in [-1.0; 1.0]
                nowBuffering[i] = Math.random() * 2 - 1;
            }
        }
        // set the buffer in the AudioBufferSourceNode
        source.buffer = myArrayBuffer;
        // connect the AudioBufferSourceNode to the
        // destination so we can hear the sound
        source.connect(audioCtx.destination);
        // start the source playing
        source.start();
    } catch (e) {
        alert('Web Audio API is not supported in this browser');
    }
}

function stop() {
    try {
        source.stop();
    } catch (e) {
        alert('Unable to stop file');
    }
}
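For what it's worth, browser JavaScript cannot open raw TCP sockets, so the packet payload would normally have to be relayed to the page over something like a WebSocket. Below is a minimal sketch of filling a buffer from such data, assuming the payload is raw signed 16-bit little-endian PCM at a known sample rate and channel count; the WebSocket URL, sample rate and mono layout are placeholder assumptions, and the audioCtx created above is reused. If the packets actually carry compressed or containerized audio (MP3, AAC, Opus, ...), you would need audioCtx.decodeAudioData() or a dedicated decoder instead.
// Minimal sketch: play 16-bit PCM chunks relayed over a WebSocket.
// Assumptions (placeholders): ws://localhost:8080 forwards the TCP payload,
// and the audio is mono, 8000 Hz, signed 16-bit little-endian PCM.
var SAMPLE_RATE = 8000;
var ws = new WebSocket('ws://localhost:8080');
ws.binaryType = 'arraybuffer';

ws.onmessage = function (event) {
    var int16 = new Int16Array(event.data); // raw samples from the packet payload
    var chunkBuffer = audioCtx.createBuffer(1, int16.length, SAMPLE_RATE);
    var channelData = chunkBuffer.getChannelData(0);
    for (var i = 0; i < int16.length; i++) {
        channelData[i] = int16[i] / 32768; // scale to the [-1.0, 1.0] range Web Audio expects
    }
    // An AudioBufferSourceNode can only be started once, so create one per chunk
    var chunkSource = audioCtx.createBufferSource();
    chunkSource.buffer = chunkBuffer;
    chunkSource.connect(audioCtx.destination);
    chunkSource.start();
};
For continuous playback you would additionally have to schedule successive chunks back to back (e.g. chunkSource.start(nextStartTime)), which this sketch leaves out.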

Related

Is it possible to avoid the "AudioContext was not allowed to start" alert?

I ran into a browser policy problem: "The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page."
I want to execute the code below when the page loads.
So I tried to mock the user click gesture using a hidden button and a load event listener, but that failed.
Is it possible?
let my_array = [];

function my_function() {
    let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    let analyser = audioCtx.createAnalyser();
    let oscillator = audioCtx.createOscillator();
    oscillator.type = "triangle"; // Set oscillator to output triangle wave
    oscillator.connect(analyser); // Connect oscillator output to analyser input
    let gain = audioCtx.createGain();
    let scriptProcessor = audioCtx.createScriptProcessor(4096, 1, 1);
    analyser.connect(scriptProcessor); // Connect analyser output to scriptProcessor input
    scriptProcessor.connect(gain); // Connect scriptProcessor output to gain input
    gain.connect(audioCtx.destination); // Connect gain output to audiocontext destination
    gain.gain.value = 0; // Disable volume
    scriptProcessor.onaudioprocess = function (bins) {
        bins = new Float32Array(analyser.frequencyBinCount);
        analyser.getFloatFrequencyData(bins);
        for (var i = 0; i < bins.length; i = i + 1) {
            my_array.push(bins[i]);
        }
        analyser.disconnect();
        scriptProcessor.disconnect();
        gain.disconnect();
    };
    // audioCtx.resume().then(() => {
    //     oscillator.start(0);
    // });
    oscillator.start(0);
}
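For context: events dispatched from script (including clicks on a hidden button) do not count as a user activation, so an AudioContext created on page load stays suspended. A minimal sketch of the usual workaround, assuming my_function above is what should run once audio is allowed, is to defer it until the first genuine gesture anywhere on the page:
// Run my_function() after the first real user interaction with the page.
function unlockAudio() {
    document.removeEventListener('click', unlockAudio);
    document.removeEventListener('keydown', unlockAudio);
    my_function();
}
document.addEventListener('click', unlockAudio);
document.addEventListener('keydown', unlockAudio);
If the context already exists, calling audioCtx.resume() from such a handler has the same effect.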

Javascript captureStream with audio [duplicate]

I'm working on a project in which I'd like to:
Load a video in JS and display it on the canvas.
Use filters to alter the appearance of the canvas (and therefore the video).
Use the MediaStream captureStream() method and a MediaRecorder object to record the surface of the canvas and the audio of the original video.
Play the stream of both the canvas and the audio in an HTML video element.
I've been able to display the canvas recording in a video element by tweaking this WebRTC demo code: https://webrtc.github.io/samples/src/content/capture/canvas-record/
That said, I can't figure out how to record the video's audio alongside the canvas. Is it possible to create a MediaStream containing MediaStreamTrack instances from two different sources/elements?
According to the MediaStream API's specs there should theoretically be some way to accomplish this:
https://w3c.github.io/mediacapture-main/#introduction
"The two main components in the MediaStream API are the MediaStreamTrack and MediaStream interfaces. The MediaStreamTrack object represents media of a single type that originates from one media source in the User Agent, e.g. video produced by a web camera. A MediaStream is used to group several MediaStreamTrack objects into one unit that can be recorded or rendered in a media element."
Is it possible to create a MediaStream containing MediaStreamTrack instances from two different sources/elements?
Yes, you can do it using the MediaStream.addTrack() method, or new MediaStream([track1, track2]).
The OP already knows how to get all of this, but here is a reminder for future readers:
To get a video stream track from a <canvas>, you can call the canvas.captureStream(framerate) method.
To get an audio stream track from a <video> element, you can use the Web Audio API and its createMediaStreamDestination method.
This will return a MediaStreamAudioDestinationNode (dest) containing our audio stream. You'll then have to connect a MediaElementAudioSourceNode, created from your <video> element, to this dest.
If you need to add more audio tracks to this stream, you should connect all these sources to dest.
Now that we've got two streams, one for the <canvas> video and one for the audio, we can either add the audio track to the canvas stream before we initialize the recorder:
canvasStream.addTrack(audioStream.getAudioTracks()[0]);
const recorder = new MediaRecorder(canvasStream)
or we can create a third MediaStream object from these two tracks:
const [videoTrack] = canvasStream.getVideoTracks();
const [audioTrack] = audioStream.getAudioTracks();
const recordedStream = new MediaStream([videoTrack, audioTrack]);
const recorder = new MediaRecorder(recordedStream);
Here is a complete example:
var
    btn = document.querySelector("button"),
    canvas,
    cStream,
    aStream,
    vid,
    recorder,
    analyser,
    dataArray,
    bufferLength,
    chunks = [];

function clickHandler() {
    btn.textContent = 'stop recording';
    if (!aStream) {
        initAudioStream();
    }
    cStream = canvas.captureStream(30);
    cStream.addTrack(aStream.getAudioTracks()[0]);
    recorder = new MediaRecorder(cStream);
    recorder.start();
    recorder.ondataavailable = saveChunks;
    recorder.onstop = exportStream;
    btn.onclick = stopRecording;
}

function exportStream(e) {
    if (chunks.length) {
        var blob = new Blob(chunks, { type: chunks[0].type });
        var vidURL = URL.createObjectURL(blob);
        var vid = document.createElement('video');
        vid.controls = true;
        vid.src = vidURL;
        vid.onended = function() {
            URL.revokeObjectURL(vidURL);
        };
        document.body.insertBefore(vid, canvas);
    } else {
        document.body.insertBefore(document.createTextNode('no data saved'), canvas);
    }
}

function saveChunks(e) {
    e.data.size && chunks.push(e.data);
}

function stopRecording() {
    vid.pause();
    btn.remove();
    recorder.stop();
}

function initAudioStream() {
    var audioCtx = new AudioContext();
    // create a stream from our AudioContext
    var dest = audioCtx.createMediaStreamDestination();
    aStream = dest.stream;
    // connect our video element's output to the stream
    var sourceNode = audioCtx.createMediaElementSource(vid);
    sourceNode.connect(dest);
    // start the video
    vid.play();
    // just for the fancy canvas drawings
    analyser = audioCtx.createAnalyser();
    sourceNode.connect(analyser);
    analyser.fftSize = 2048;
    bufferLength = analyser.frequencyBinCount;
    dataArray = new Uint8Array(bufferLength);
    analyser.getByteTimeDomainData(dataArray);
    // output to our headphones
    sourceNode.connect(audioCtx.destination);
    startCanvasAnim();
}

function enableButton() {
    vid.oncanplay = null;
    btn.onclick = clickHandler;
    btn.disabled = false;
}

var loadVideo = function() {
    vid = document.createElement('video');
    vid.crossOrigin = 'anonymous';
    vid.oncanplay = enableButton;
    vid.src = 'https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4';
};

function startCanvasAnim() {
    // from MDN https://developer.mozilla.org/en/docs/Web/API/AnalyserNode#Examples
    canvas = Object.assign(document.createElement("canvas"), { width: 500, height: 200 });
    document.body.prepend(canvas);
    var canvasCtx = canvas.getContext('2d');
    canvasCtx.fillStyle = 'rgb(200, 200, 200)';
    canvasCtx.lineWidth = 2;
    canvasCtx.strokeStyle = 'rgb(0, 0, 0)';
    var draw = function() {
        var drawVisual = requestAnimationFrame(draw);
        analyser.getByteTimeDomainData(dataArray);
        canvasCtx.fillRect(0, 0, canvas.width, canvas.height);
        canvasCtx.beginPath();
        var sliceWidth = canvas.width * 1.0 / bufferLength;
        var x = 0;
        for (var i = 0; i < bufferLength; i++) {
            var v = dataArray[i] / 128.0;
            var y = v * canvas.height / 2;
            if (i === 0) {
                canvasCtx.moveTo(x, y);
            } else {
                canvasCtx.lineTo(x, y);
            }
            x += sliceWidth;
        }
        canvasCtx.lineTo(canvas.width, canvas.height / 2);
        canvasCtx.stroke();
    };
    draw();
}

loadVideo();
button { vertical-align: top }
<button disabled>record</button>
Kaiido's demo is brilliant. For those just looking for the tl;dr code to add an audio stream to their existing canvas stream:
let videoOrAudioElement = /* your audio source element */;
// get the audio track:
let ctx = new AudioContext();
let dest = ctx.createMediaStreamDestination();
let sourceNode = ctx.createMediaElementSource(videoOrAudioElement);
sourceNode.connect(dest);
sourceNode.connect(ctx.destination);
let audioTrack = dest.stream.getAudioTracks()[0];
// add it to your canvas stream:
canvasStream.addTrack(audioTrack);
// use your canvas stream like you would normally:
let recorder = new MediaRecorder(canvasStream);
// ...

Is it possible to get raw values of audio data using MediaRecorder()

I'm using MediaRecorder() with getUserMedia() to record audio data from the browser. It works, but the recorded data is delivered as Blobs. I want to get the raw audio data (amplitudes), not Blobs. Is it possible to do this?
My code looks like this:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(stream => {
    const recorder = new MediaRecorder(stream);
    recorder.ondataavailable = e => {
        console.log(e.data); // output: Blob { size: 8452, type: "audio/ogg; codecs=opus" }
    };
    recorder.start(1000); // send data every 1s
}).catch(console.error);
MediaRecorder is useful to create files; if you want to do audio processing, Web Audio would be a better approach. See this HTML5Rocks tutorial which shows how to integrate getUserMedia with Web Audio using createMediaStreamSource from Web Audio.
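A minimal sketch of that approach (an illustration of the suggestion above, not code from the linked tutorial): wrap the getUserMedia stream in a MediaStreamAudioSourceNode and read time-domain samples from an AnalyserNode. For gapless access to every single sample, an AudioWorkletNode would be the more robust choice.
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(stream => {
    const audioCtx = new AudioContext();
    const source = audioCtx.createMediaStreamSource(stream);
    const analyser = audioCtx.createAnalyser();
    analyser.fftSize = 2048;
    source.connect(analyser);

    const samples = new Float32Array(analyser.fftSize);
    setInterval(() => {
        analyser.getFloatTimeDomainData(samples); // raw amplitudes in [-1, 1]
        console.log(samples);
    }, 100);
}).catch(console.error);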
No need for MediaRecorder. Use web audio to gain access to raw data values, e.g. like this:
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(spectrum).catch(console.log);

function spectrum(stream) {
    const audioCtx = new AudioContext();
    const analyser = audioCtx.createAnalyser();
    audioCtx.createMediaStreamSource(stream).connect(analyser);
    const canvas = div.appendChild(document.createElement("canvas"));
    canvas.width = window.innerWidth - 20;
    canvas.height = window.innerHeight - 20;
    const ctx = canvas.getContext("2d");
    const data = new Uint8Array(canvas.width);
    ctx.strokeStyle = 'rgb(0, 125, 0)';
    setInterval(() => {
        ctx.fillStyle = "#a0a0a0";
        ctx.fillRect(0, 0, canvas.width, canvas.height);
        analyser.getByteFrequencyData(data);
        ctx.lineWidth = 2;
        let x = 0;
        for (let d of data) {
            const y = canvas.height - (d / 128) * canvas.height / 4;
            const c = Math.floor((x * 255) / canvas.width);
            ctx.fillStyle = `rgb(${c},0,${255 - x})`;
            ctx.fillRect(x++, y, 2, canvas.height - y);
        }
        analyser.getByteTimeDomainData(data);
        ctx.lineWidth = 5;
        ctx.beginPath();
        x = 0;
        for (let d of data) {
            const y = canvas.height - (d / 128) * canvas.height / 2;
            x ? ctx.lineTo(x++, y) : ctx.moveTo(x++, y);
        }
        ctx.stroke();
    }, 1000 * canvas.width / audioCtx.sampleRate);
}
I used the following code to log an array of integers to the console. As to what they mean, I assume they are amplitudes.
const mediaRecorder = new MediaRecorder(mediaStream);
mediaRecorder.ondataavailable = async (blob: BlobEvent) => console.log(await blob.data.arrayBuffer());
mediaRecorder.start(100);
If you want to draw them, I found this example, which I forked on CodePen: https://codepen.io/rhwilburn/pen/vYXggbN. It pulses the circle once you click on it.
This is a complex subject because the MediaRecorder API can record audio using multiple different container formats and, on top of that, supports several audio codecs. However, let's say you tie your recordings down to a specific codec and container type, for example by forcing all recordings from the front end to use the WebM container with Opus-encoded audio:
new MediaRecorder(stream as MediaStream, {
    mimeType: "audio/webm;codecs=opus",
});
If the browser supports this type of recording (the latest Firefox / Edge / Chrome do for me, though it may not work in every browser and version; as of right now Safari only provides partial support, so it may not work there), then you can write code that parses the WebM container's Opus-encoded chunks and decodes those chunks into raw PCM audio.
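Whether this is the case can be checked up front with MediaRecorder.isTypeSupported(), e.g. (a small illustrative guard, not part of the original answer):
// Refuse to record if the browser can't produce this container/codec combination.
const mimeType = 'audio/webm;codecs=opus';
if (!MediaRecorder.isTypeSupported(mimeType)) {
    // Fall back or bail out; the decoding below assumes WebM/Opus input.
    throw new Error(mimeType + ' recording is not supported in this browser');
}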
I see that the WebM/Opus containers produced by MediaRecorder in Firefox / Edge / Chrome typically have a SamplingFrequency block and a Channels block near the start, which indicate the sample rate in Hz and the number of channels.
They then typically group audio into Cluster blocks. Each Cluster block is usually followed immediately by a Timestamp block (though the WebM specification does not guarantee this ordering), and then by SimpleBlocks containing the binary Opus audio data. Clusters of SimpleBlocks are ordered by the timestamp of their Cluster block, and the SimpleBlocks within a cluster are ordered by the timecode header of the Opus chunk they contain, which is stored in signed two's complement format.
So, putting all of this together, the following seems to work for me in Node.js using TypeScript:
import opus = require('#discordjs/opus');
import ebml = require('ts-ebml');

class BlockCluster {
    timestamp: number;
    encodedChunks: Buffer[] = [];
}

async function webMOpusToPcm(buffer: Buffer): Promise<{ pcm: Buffer, duration: number }> {
    const decoder = new ebml.Decoder();
    let rate: number;
    let channels: number;
    let clusters: BlockCluster[] = [];

    for await (const element of decoder.decode(buffer)) {
        if (element.name === 'SamplingFrequency' && element.type === 'f')
            rate = (element as any).value;
        if (element.name === 'Channels' && element.type === 'u')
            channels = (element as any).value;
        if (element.name === 'Cluster' && element.type === 'm')
            clusters.push(new BlockCluster());
        if (element.name === 'Timestamp' && element.type === 'u')
            clusters[clusters.length - 1].timestamp = (element as any).value;
        if (element.name === 'SimpleBlock' && element.type === 'b') {
            const data: Uint8Array = (element as any).data;
            const dataBuffer: Buffer = Buffer.from(data);
            clusters[clusters.length - 1].encodedChunks.push(dataBuffer);
        }
    }

    clusters.sort((a, b) => {
        return a.timestamp - b.timestamp;
    });

    let chunks: Uint8Array[] = [];
    clusters.forEach(cluster => {
        cluster.encodedChunks.sort((a, b) => {
            const timecodeA = readTimecode(a);
            const timecodeB = readTimecode(b);
            return timecodeA - timecodeB;
        });
        cluster.encodedChunks.forEach(chunk => {
            const opusChunk = readOpusChunk(chunk);
            chunks.push(opusChunk);
        });
    });

    let pcm: Buffer = Buffer.alloc(0);
    const opusDecoder = new opus.OpusEncoder(rate, channels);
    chunks.forEach(chunk => {
        const opus = Buffer.from(chunk);
        const decoded: Buffer = opusDecoder.decode(opus);
        pcm = Buffer.concat([pcm, decoded]);
    });

    const totalSamples = (pcm.byteLength * 8) / (channels * 16);
    const duration = totalSamples / rate;
    return { pcm, duration };
}

function readOpusChunk(block: Buffer): Buffer {
    return block.slice(4);
}

function readTimecode(block: Buffer): number {
    const timecode = (block.readUInt8(0) << 16) | (block.readUInt8(1) << 8) | block.readUInt8(2);
    return (timecode & 0x800000) ? -(0x1000000 - timecode) : timecode;
}
Note that your input buffer must be the WebM/Opus container, and that when playing back the raw PCM audio you must set the sample rate (in Hz) and the number of channels correctly (to the values read from the WebM container), otherwise the audio will sound distorted. The #discordjs/opus library currently appears to use a consistent bit depth of 16 for raw audio, so you may also have to set the playback bit depth to 16.
Furthermore, what works for me today may not work tomorrow. Libraries may change how they work, browsers may change how they record and containerize, the codecs may change, some browsers may not support recording in this format at all, and the container format itself may change. There are a lot of variables, so please take caution.
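One illustrative way to carry those parameters along with the samples (not part of the answer above; pcmToWav is a hypothetical helper) is to wrap the decoded PCM in a standard 44-byte WAV header, so that any player picks up the sample rate, channel count and 16-bit depth from the file itself:
// Wrap raw signed 16-bit little-endian PCM in a minimal WAV (RIFF) header.
function pcmToWav(pcm, rate, channels) {
    const bitDepth = 16;
    const blockAlign = channels * bitDepth / 8;
    const header = Buffer.alloc(44);
    header.write('RIFF', 0);
    header.writeUInt32LE(36 + pcm.byteLength, 4);
    header.write('WAVE', 8);
    header.write('fmt ', 12);
    header.writeUInt32LE(16, 16);                // fmt chunk size
    header.writeUInt16LE(1, 20);                 // audio format 1 = uncompressed PCM
    header.writeUInt16LE(channels, 22);
    header.writeUInt32LE(rate, 24);
    header.writeUInt32LE(rate * blockAlign, 28); // byte rate
    header.writeUInt16LE(blockAlign, 32);
    header.writeUInt16LE(bitDepth, 34);
    header.write('data', 36);
    header.writeUInt32LE(pcm.byteLength, 40);
    return Buffer.concat([header, pcm]);
}

// Usage with the function above, substituting the sample rate and channel
// count actually read from the container, e.g.:
// const { pcm } = await webMOpusToPcm(webmBuffer);
// require('fs').writeFileSync('recording.wav', pcmToWav(pcm, 48000, 1));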


How can I clear buffer in XAudioJS

I am using XAudioJS (https://github.com/taisel/XAudioJS) for a current project and I need to do the following:
On mousedown, a sine wave should be played. I did this by generating a sine wave in the background; when the event fires, the volume is set to 1 or 0 accordingly.
When the user clicks on a div, a sequence of samples is played. Other user input is paused during this time.
I managed to do both independently, but I can't bring them together. When the user clicks on the div, the audio sequence is played, but a bit of the previously muted sine wave is played first.
I suspect this has to do with XAudioJS's buffer: it is not cleared, so parts of the sine wave still sitting in the buffer get played. I can't figure out a way to clear it before playing the sequence, though.
I also don't really understand exactly what buffer-low and buffer-high mean in XAudioJS, or which values are better for performance and which are not.
Here is the underRunCallback function I passed to XAudioJS:
var sample = []; // samples for the sample sequence
var samplePos = 0; // counter for samples
var sampleRateUsed = 8000;
var frequencyUsed = 550; // frequency for sine wave
var frequencyCounter = 0;
var counterIncrementAmount = Math.PI * 2 * frequencyUsed / sampleRateUsed;

function underRunCallback(samplesToGenerate) {
    // Generate audio on the fly =>
    if (samplesToGenerate == 0) {
        return [];
    }
    var tempBuffer = []; // buffer that is returned
    // create sine wave on the fly:
    if (!isPlaying) {
        while (samplesToGenerate--) {
            tempBuffer.push(Math.sin(frequencyCounter));
            frequencyCounter += counterIncrementAmount;
        }
        return tempBuffer;
    }
    // NEED TO CLEAR BUFFER HERE
    // play sequence of samples:
    samplesToGenerate = Math.min(samplesToGenerate, sample.length - samplePos);
    if (samplePos == sample.length) { // if all samples of sequence played
        input_paused = false; // allow new userinput
        isPlaying = false;
        xaudioHandle.changeVolume(0); // mute audio
    }
    if (samplesToGenerate > 0) { // if new slice of samples is needed
        tempBuffer = sample.slice(samplePos, samplePos + samplesToGenerate);
        samplePos += samplesToGenerate;
        return tempBuffer;
    } else {
        isPlayingB = false;
        return [];
    }
}
