How can I play an arrayBuffer as an audio file? - javascript

I am receiving an arrayBuffer via a socket.io event and want to be able to process and play the stream as an audio file.
I am receiving the buffer like so:
retrieveAudioStream = () => {
  this.socket.on('stream', (arrayBuffer) => {
    console.log('arrayBuffer', arrayBuffer)
  })
}
Is it possible to set the src attribute of an <audio/> element to a buffer? If not, how can I play the incoming buffer stream?
edit:
To show how I am getting my audio input and streaming it:
window.navigator.getUserMedia(constraints, this.initializeRecorder, this.handleError);
initializeRecorder = (stream) => {
  const audioContext = window.AudioContext;
  const context = new audioContext();
  const audioInput = context.createMediaStreamSource(stream);
  const bufferSize = 2048;
  // create a javascript node
  const recorder = context.createScriptProcessor(bufferSize, 1, 1);
  // specify the processing function
  recorder.onaudioprocess = this.recorderProcess;
  // connect stream to our recorder
  audioInput.connect(recorder);
  // connect our recorder to the previous destination
  recorder.connect(context.destination);
}
This is where I receive the inputBuffer event and stream via a socket.io event
recorderProcess = (e) => {
  const left = e.inputBuffer.getChannelData(0);
  this.socket.emit('stream', this.convertFloat32ToInt16(left))
}
EDIT 2:
Adding Raymond's suggestion:
retrieveAudioStream = () => {
  const audioContext = new window.AudioContext();
  this.socket.on('stream', (buffer) => {
    const b = audioContext.createBuffer(1, buffer.length, audioContext.sampleRate);
    b.copyToChannel(buffer, 0, 0)
    const s = audioContext.createBufferSource();
    s.buffer = b
  })
}
Getting error: NotSupportedError: Failed to execute 'createBuffer' on 'BaseAudioContext': The number of frames provided (0) is less than or equal to the minimum bound (0).

Based on a quick read of what initializeRecorder and recorderProcess do, it looks like you're converting the float32 samples to int16 in some way, and that is what gets sent to retrieveAudioStream.
If this is correct, then the arrayBuffer is an array of int16 values. Convert them back to float32 (most likely by dividing each value by 32768) and save them in a Float32Array. Then create an AudioBuffer of the same length and use copyToChannel(float32Array, 0, 0) to write the values into the AudioBuffer. Use an AudioBufferSourceNode with this buffer to play out the audio.
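For illustration, a minimal sketch of that approach (not a drop-in fix): it assumes the socket delivers the raw bytes of the Int16Array produced by recorderProcess and that sender and receiver use the same sample rate. Note that a plain ArrayBuffer has byteLength rather than length, which would also explain the zero-frame error from EDIT 2.
retrieveAudioStream = () => {
  const audioContext = new (window.AudioContext || window.webkitAudioContext)();
  this.socket.on('stream', (arrayBuffer) => {
    // Interpret the incoming bytes as the int16 samples sent by recorderProcess.
    const int16 = new Int16Array(arrayBuffer);
    if (int16.length === 0) return; // guard against the zero-frame error
    // Convert back to float32 in the range [-1, 1).
    const float32 = new Float32Array(int16.length);
    for (let i = 0; i < int16.length; i++) {
      float32[i] = int16[i] / 32768;
    }
    // Write the samples into an AudioBuffer and play it.
    const audioBuffer = audioContext.createBuffer(1, float32.length, audioContext.sampleRate);
    audioBuffer.copyToChannel(float32, 0, 0);
    const source = audioContext.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioContext.destination);
    source.start();
  });
}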

Related

Struggling to playback a Float 32 Array (Web Audio API)

I'm building a simple looper to help me come to an understanding of the Web Audio API, but I'm struggling to get a buffer source to play back the recorded audio.
The code has been simplified as much as possible, however with annotation it's still 70+ lines (omitting the CSS and HTML), so apologies for that. A version including the CSS and HTML can be found on JSFiddle:
https://jsfiddle.net/b5w9j4yk/10/
Any help would be much appreciated. Thank you :)
// Aim of the code is to record the input from the mic to a Float32Array, then pass that to a buffer which is linked to a buffer source, so the audio can be played back.
// Grab DOM Elements
const playButton = document.getElementById('play');
const recordButton = document.getElementById('record');

// If allowed access to microphone run this code
const promise = navigator.mediaDevices.getUserMedia({audio: true, video: false})
  .then((stream) => {
    recordButton.addEventListener('click', () => {
      // when the record button is pressed, clear and instantiate the record buffer
      if (!recordArmed) {
        recordArmed = true;
        recordButton.classList.add('on');
        console.log('recording armed')
        recordBuffer = new Float32Array(audioCtx.sampleRate * 10);
      }
      else {
        recordArmed = false;
        recordButton.classList.remove('on');
        // After the recording has stopped, pass the recordBuffer to the source's buffer
        myArrayBuffer.copyToChannel(recordBuffer, 0);
        // Looks like the buffer has been passed
        console.log(myArrayBuffer.getChannelData(0));
      }
    });

    // this should start the playback of the source, intended to be used after the audio has been recorded; I can't get it to work in this given context
    playButton.addEventListener('click', () => {
      playButton.classList.add('on');
      source.start();
    });

    // Transport variables
    let recordArmed = false;
    let playing = false;

    // this buffer will later be assigned a Float32Array / I'd like to keep this intermediate buffer so the audio can be sliced and manipulated with ease later
    let recordBuffer;

    // Declare context, input source and a block processor to pass the input source to the recordBuffer
    const audioCtx = new AudioContext();
    const audioIn = audioCtx.createMediaStreamSource(stream);
    const processor = audioCtx.createScriptProcessor(512, 1, 1);

    // Create a source and corresponding buffer for playback, then link them
    const myArrayBuffer = audioCtx.createBuffer(1, audioCtx.sampleRate * 10, audioCtx.sampleRate);
    const source = audioCtx.createBufferSource();
    source.buffer = myArrayBuffer;

    // Audio Routing
    audioIn.connect(processor);
    source.connect(audioCtx.destination);

    // When recording is armed, pass the samples of the block one at a time to the record buffer
    processor.onaudioprocess = ((audioProcessingEvent) => {
      let inputBuffer = audioProcessingEvent.inputBuffer;
      let i = 0;
      if (recordArmed) {
        for (let channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
          let inputData = inputBuffer.getChannelData(channel);
          let avg = 0;
          inputData.forEach(sample => {
            recordBuffer.set([sample], i);
            i++;
          });
        }
      }
      else {
        i = 0;
      }
    });
  })

Play raw audio with JavaScript

I have a stream of numbers like this
-0.00015259254737998596,-0.00009155552842799158,0.00009155552842799158,0.00021362956633198035,0.0003662221137119663,0.0003967406231879635,0.00024414807580797754,0.00012207403790398877,0.00012207403790398877,0.00012207403790398877,0.0003357036042359691,0.0003357036042359691,0.00018311105685598315,0.00003051850947599719,0,-0.00012207403790398877,0.00006103701895199438,0.00027466658528397473,0.0003967406231879635,0.0003967406231879635,0.0003967406231879635,0.0003967406231879635,0.0003967406231879635,0.0003662221137119663,0.0004882961516159551,0.0004577776421399579,0.00027466658528397473,0.00003051850947599719,-0.00027466658528397473....
Which supposedly represent an audio stream. I got them from here and I've transmitted them over the web; now I'm trying to play the actual sound. I got a snippet from here, but I'm getting Uncaught (in promise) DOMException: Unable to decode audio data.
I feel like I'm missing quite a lot; I just expect this to work like magic, and that just might not be the case.
My code
var ws = new WebSocket("ws://....");
ws.onmessage = function (event) {
  playByteArray(event.data);
}

var context = new AudioContext();

function playByteArray(byteArray) {
  var arrayBuffer = new ArrayBuffer(byteArray.length);
  var bufferView = new Uint8Array(arrayBuffer);
  for (var i = 0; i < byteArray.length; i++) {
    bufferView[i] = byteArray[i];
  }
  context.decodeAudioData(arrayBuffer, function (buffer) {
    buf = buffer;
    play();
  });
}

// Play the loaded file
function play() {
  // Create a source node from the buffer
  var source = context.createBufferSource();
  source.buffer = buf;
  // Connect to the final output node (the speakers)
  source.connect(context.destination);
  // Play immediately
  source.start(0);
}
And the broadcasting part
var ws = new WebSocket("ws://.....");

window.addEventListener("audioinput", function onAudioInput(evt) {
  if (ws) {
    ws.send(evt.data);
  }
}, false);

audioinput.start({
  bufferSize: 8192
});
It doesn't look like you're dealing with compatible audio data formats. The code you linked to is for playing byte arrays, in which case your audio data should be a (much longer) string of integer numbers from 0 to 255.
What you've got is a fairly short (as audio data goes) string of floating point numbers. I can't tell what audio format that's supposed to be, but it would require a different player.
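If the stream really is raw float samples rather than an encoded file, one rough sketch is to skip decodeAudioData entirely (it expects an encoded format such as WAV or MP3) and write the samples straight into an AudioBuffer. This assumes each WebSocket message is a comma-separated string of floats and that you know the sender's sample rate (44100 here is only a guess):
const context = new AudioContext();
const assumedSampleRate = 44100; // assumption: must match whatever produced the samples

function playFloatSamples(text) {
  // Parse the comma-separated floats into a Float32Array.
  const samples = Float32Array.from(text.split(',').map(Number));
  if (samples.length === 0) return;
  // Copy them into a one-channel AudioBuffer and play it.
  const buffer = context.createBuffer(1, samples.length, assumedSampleRate);
  buffer.copyToChannel(samples, 0, 0);
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start(0);
}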

How do I programmatically play a webm audio I just recorded in HTML5?

I just recorded a piece of audio and I want to play it with pure Javascript code.
So this is my code:
navigator.getUserMedia({audio: true}, function(stream){
  var recorder = new MediaRecorder(stream);
  recorder.start(1000);
  recorder.ondataavailable = function(e){
    console.log(e.data);
    // var buffer = new Blob([e.data], {type: "video/webm"});
  };
});
What do I have to do in ondataavailable so that I can play the audio chunks stored in memory, without an audio or video tag in the HTML?
I don't really see why you don't want an audio or video element, but anyway, the first steps are the same.
The MediaRecorder.ondataavailable event will fire at regular intervals, and will contain a data property containing a chunk of the recorded media.
You need to store these chunks, in order to be able to merge them in a single Blob at the end of the recording.
To merge them, you would simply call new Blob(chunks_array), where chunks_array is an Array containing all the chunk Blobs you got from the dataavailable events.
Once you've got this final Blob, you can use it as normal media: either play it in a MediaElement thanks to the URL.createObjectURL method, or convert it to an ArrayBuffer and then decode it through the Web Audio API, or whatever other way you'd like.
navigator.mediaDevices.getUserMedia({audio: true})
  .then(recordStream)
  .catch(console.error);

function recordStream(stream){
  const chunks = []; // an Array to store all our chunks
  const rec = new MediaRecorder(stream);
  rec.ondataavailable = e => chunks.push(e.data);
  rec.onstop = e => {
    stream.getTracks().forEach(s => s.stop());
    finalize(chunks);
  };
  rec.start();
  setTimeout(() => rec.stop(), 5000); // stop the recorder in 5s
}

function finalize(chunks){
  const blob = new Blob(chunks);
  playMedia(blob);
}

function playMedia(blob){
  const ctx = new AudioContext();
  const fileReader = new FileReader();
  fileReader.onload = e => ctx.decodeAudioData(fileReader.result)
    .then(buf => {
      btn.onclick = e => {
        const source = ctx.createBufferSource();
        source.buffer = buf;
        source.connect(ctx.destination);
        source.start(0);
      };
      btn.disabled = false;
    });
  fileReader.readAsArrayBuffer(blob);
}
<button id="btn" disabled>play</button>
And as a plnkr, for Chrome and its heavy iframe restrictions.
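The snippet above only shows the decodeAudioData route; the other option mentioned (a MediaElement fed by URL.createObjectURL) is roughly this sketch, reusing the blob produced by finalize() and creating the element in JS so no tag is needed in the HTML:
function playMediaWithElement(blob){
  const url = URL.createObjectURL(blob);
  const audio = new Audio(url); // an HTMLAudioElement created in JS, never added to the DOM
  audio.onended = () => URL.revokeObjectURL(url); // release the object URL when playback ends
  return audio.play(); // returns a Promise; may need to run from a user gesture
}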

Web Audio API getFloatFrequencyData function setting Float32Array argument data to array of -Infinity values

I'm currently playing around with the Web Audio API in Chrome (60.0.3112.90) to possibly build a sound wave of a given file via FileReader, AudioContext, createScriptProcessor, and createAnalyser. I have the following code:
const visualize = analyser => {
  analyser.fftSize = 256;
  let bufferLength = analyser.frequencyBinCount;
  let dataArray = new Float32Array(bufferLength);
  analyser.getFloatFrequencyData(dataArray);
}
loadAudio(file){
  // creating FileReader to convert audio file to an ArrayBuffer
  const fileReader = new FileReader();
  navigator.getUserMedia = (navigator.getUserMedia ||
    navigator.webkitGetUserMedia ||
    navigator.mozGetUserMedia ||
    navigator.msGetUserMedia);
  fileReader.addEventListener('loadend', () => {
    const fileArrayBuffer = fileReader.result;
    let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    let processor = audioCtx.createScriptProcessor(4096, 1, 1);
    let analyser = audioCtx.createAnalyser();
    analyser.connect(processor);
    let data = new Float32Array(analyser.frequencyBinCount);
    let soundBuffer;
    let soundSource = audioCtx.createBufferSource();
    // loading audio track into buffer
    audioCtx.decodeAudioData(
      fileArrayBuffer,
      buffer => {
        soundBuffer = buffer;
        soundSource.buffer = soundBuffer;
        soundSource.connect(analyser);
        soundSource.connect(audioCtx.destination);
        processor.onaudioprocess = () => {
          // data becomes array of -Infinity values after call below
          analyser.getFloatFrequencyData(data);
        };
        visualize(analyser);
      },
      error => 'error with decoding audio data: ' + error.err
    );
  });
  fileReader.readAsArrayBuffer(file);
}
Upon loading a file, I get all the way to analyser.getFloatFrequencyData(data). Reading the Web Audio API docs, they say the parameter is:
The Float32Array that the frequency domain data will be copied to.
For any sample which is silent, the value is -Infinity.
In my case, I have both an mp3 and a wav file I'm using to test this, and after invoking analyser.getFloatFrequencyData(data), both files end up giving me data which becomes an array of -Infinity values.
This may be due to my ignorance of the Web Audio API, but my question is: why are both files, which contain loud audio, giving me an array that represents silent samples?
The Web Audio AnalyserNode is only designed to work in realtime. (It used to be called RealtimeAnalyser.) Web Audio doesn't have the ability to do analysis on buffers; take a look at another library, like DSP.js.
Instead of:
soundSource.connect(analyser);
soundSource.connect(audioCtx.destination);
try:
soundSource.connect(analyser);
analyser.connect(audioCtx.destination);
Realising I should do a source ==> analyser ==> destination chain solved this problem when I encountered it.
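As a rough sketch, that corrected chain could look like this, assuming audioCtx is the AudioContext and soundBuffer the decoded AudioBuffer from the question:
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256;

const source = audioCtx.createBufferSource();
source.buffer = soundBuffer;
source.connect(analyser);
analyser.connect(audioCtx.destination); // the analyser now sits in the realtime path
source.start();

// Poll the frequency data while the buffer is actually playing.
const data = new Float32Array(analyser.frequencyBinCount);
const timer = setInterval(() => {
  analyser.getFloatFrequencyData(data);
  // data now holds dB values for the current block instead of -Infinity
}, 100);
source.onended = () => clearInterval(timer);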

Using ChannelSplitter and ChannelMerger nodes in Web Audio API

I am attempting to use a ChannelSplitter node to send an audio signal both into a ChannelMerger node and to the destination, and then trying to use the ChannelMerger node to merge two different audio signals (one from the split source, one from the microphone via getUserMedia) into a recorder using Recorder.js.
I keep getting the following error: "Uncaught SyntaxError: An invalid or illegal string was specified."
The error is at the following line of code:
audioSource.splitter.connect(merger);
Where audioSource is an instance of ThreeAudio.Source from the library ThreeAudio.js, splitter is a channel splitter I instantiated myself by modifying the prototype, and merger is my merger node. The code that precedes it is:
merger = context.createChannelMerger(2);
userInput.connect(merger);
Where userInput is the stream from the user's microphone. That one connects without throwing an error. Sound is getting from the audioSource to the destination (I can hear it), so it doesn't seem like the splitter is necessarily wrong - I just can't seem to connect it.
Does anyone have any insight?
I was struggling to understand the ChannelSplitterNode and ChannelMergerNode API. Finally I found the missing part: the 2nd and 3rd optional parameters of the connect() method, the output and input channel.
connect(destinationNode: AudioNode, output?: number, input?: number): AudioNode;
When using the connect() method with Splitter or Merger nodes, specify the output/input channel. This is how you split and merge audio data.
You can see in this example how I load audio data, split it into 2 channels, and control the left/right output. Notice the 2nd and 3rd parameters of the connect() method:
const audioUrl = "https://s3-us-west-2.amazonaws.com/s.cdpn.io/858/outfoxing.mp3";
const audioElement = new Audio(audioUrl);
audioElement.crossOrigin = "anonymous"; // cross-origin - if file is stored on remote server
const audioContext = new AudioContext();
const audioSource = audioContext.createMediaElementSource(audioElement);
const volumeNodeL = new GainNode(audioContext);
const volumeNodeR = new GainNode(audioContext);
volumeNodeL.gain.value = 2;
volumeNodeR.gain.value = 2;
const channelsCount = 2; // or read from: 'audioSource.channelCount'
const splitterNode = new ChannelSplitterNode(audioContext, { numberOfOutputs: channelsCount });
const mergerNode = new ChannelMergerNode(audioContext, { numberOfInputs: channelsCount });
audioSource.connect(splitterNode);
splitterNode.connect(volumeNodeL, 0); // connect OUTPUT channel 0
splitterNode.connect(volumeNodeR, 1); // connect OUTPUT channel 1
volumeNodeL.connect(mergerNode, 0, 0); // connect INPUT channel 0
volumeNodeR.connect(mergerNode, 0, 1); // connect INPUT channel 1
mergerNode.connect(audioContext.destination);
let isPlaying;

function playPause() {
  // check if context is in suspended state (autoplay policy)
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  }
  isPlaying = !isPlaying;
  if (isPlaying) {
    audioElement.play();
  } else {
    audioElement.pause();
  }
}

function setBalance(val) {
  volumeNodeL.gain.value = 1 - val;
  volumeNodeR.gain.value = 1 + val;
}
<h3>Try using headphones</h3>
<button onclick="playPause()">play/pause</button>
<br><br>
<button onclick="setBalance(-1)">Left</button>
<button onclick="setBalance(0)">Center</button>
<button onclick="setBalance(+1)">Right</button>
P.S.: The audio track isn't a real stereo track, but left and right copies of the same mono playback. You can try this example with a real stereo track for a real balance effect.
Here's some working splitter/merger code that creates a ping-pong delay - that is, it sets up separate delays on the L and R channels of a stereo signal, and crosses over the feedback. This is from my input effects demo on webaudiodemos.appspot.com (code on github).
var merger = context.createChannelMerger(2);
var leftDelay = context.createDelay();
var rightDelay = context.createDelay();
var leftFeedback = context.createGain();
var rightFeedback = context.createGain();
var splitter = context.createChannelSplitter(2);

// Split the stereo signal.
splitter.connect( leftDelay, 0 );
// If the signal is dual copies of a mono signal, we don't want the right channel -
// it will just sound like a mono delay. If it was a real stereo signal, we do want
// it to just mirror the channels.
if (isTrueStereo)
  splitter.connect( rightDelay, 1 );

leftDelay.delayTime.value = delayTime;
rightDelay.delayTime.value = delayTime;
leftFeedback.gain.value = feedback;
rightFeedback.gain.value = feedback;

// Connect the routing - left bounces to right, right bounces to left.
leftDelay.connect(leftFeedback);
leftFeedback.connect(rightDelay);
rightDelay.connect(rightFeedback);
rightFeedback.connect(leftDelay);

// Re-merge the two delay channels into stereo L/R.
leftFeedback.connect(merger, 0, 0);
rightFeedback.connect(merger, 0, 1);

// Now connect your input to "splitter", and connect "merger" to your output destination.
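As a hypothetical usage example of that last comment, where input stands in for whichever source node you already have (e.g. from createMediaElementSource or createMediaStreamSource):
// Wire the ping-pong delay above into a graph.
input.connect(splitter);             // dry signal into the splitter
merger.connect(context.destination); // wet ping-pong signal out to the speakers
input.connect(context.destination);  // optionally mix the dry signal in as well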
