Struggling to play back a Float32Array (Web Audio API) - javascript

I'm building a simple looper to help me come to an understanding of the Web Audio API, but I'm struggling to get a buffer source to play back the recorded audio.
The code has been simplified as much as possible, but with the annotations it's still 70+ lines even omitting the CSS and HTML, so apologies for that. A version including the CSS and HTML can be found on JSFiddle:
https://jsfiddle.net/b5w9j4yk/10/
Any help would be much appreciated. Thank you :)
// Aim of the code is to record the input from the mic to a Float32Array, then pass that to a buffer which is linked to a buffer source, so the audio can be played back.
// Grab DOM Elements
const playButton = document.getElementById('play');
const recordButton = document.getElementById('record');
// If allowed access to microphone run this code
const promise = navigator.mediaDevices.getUserMedia({audio: true, video: false})
.then((stream) => {
recordButton.addEventListener('click', () => {
// When the record button is pressed, clear and instantiate the record buffer
if (!recordArmed) {
recordArmed = true;
recordButton.classList.add('on');
console.log('recording armed')
recordBuffer = new Float32Array(audioCtx.sampleRate * 10);
}
else {
recordArmed = false;
recordButton.classList.remove('on');
// After the recording has stopped, copy the recordBuffer into the source's buffer
myArrayBuffer.copyToChannel(recordBuffer, 0);
//Looks like the buffer has been passed
console.log(myArrayBuffer.getChannelData(0));
}
});
// This should start playback of the source; intended to be used after the audio has been recorded. I can't get it to work in this context
playButton.addEventListener('click', () => {
playButton.classList.add('on');
source.start();
});
//Transport variables
let recordArmed = false;
let playing = false;
// This buffer will later be assigned a Float32Array. I'd like to keep this intermediate buffer so the audio can be sliced and manipulated with ease later
let recordBuffer;
// Declare the context, the input source and a block processor to pass the input source to the recordBuffer
const audioCtx = new AudioContext();
const audioIn = audioCtx.createMediaStreamSource(stream);
const processor = audioCtx.createScriptProcessor(512, 1, 1);
// Create a source and a corresponding buffer for playback, then link them
const myArrayBuffer = audioCtx.createBuffer(1, audioCtx.sampleRate * 10, audioCtx.sampleRate);
const source = audioCtx.createBufferSource();
source.buffer = myArrayBuffer;
// Audio Routing
audioIn.connect(processor);
source.connect(audioCtx.destination);
// When recording is armed pass the samples of the block one at a time to the record buffer
processor.onaudioprocess = ((audioProcessingEvent) => {
let inputBuffer = audioProcessingEvent.inputBuffer;
let i = 0;
if (recordArmed) {
for (let channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
let inputData = inputBuffer.getChannelData(channel);
let avg = 0;
inputData.forEach(sample => {
recordBuffer.set([sample], i);
i++;
});
}
}
else {
i = 0;
}
});
})
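For reference, a minimal sketch of how the playback side might be wired up with the setup above; it assumes two things that the answers to the related questions below also point out: a ScriptProcessorNode usually has to be connected (even silently) to the context's destination for onaudioprocess to fire, and an AudioBufferSourceNode can only be started once, so a fresh source is needed per click.
// Sketch only; assumes audioCtx, processor, myArrayBuffer and playButton from the snippet above
const mute = audioCtx.createGain();   // 'mute' is new here, not part of the original code
mute.gain.value = 0;                  // keep the processor pulled without hearing the passthrough
processor.connect(mute);
mute.connect(audioCtx.destination);

playButton.addEventListener('click', () => {
  const source = audioCtx.createBufferSource();  // new source each click; start() is one-shot
  source.buffer = myArrayBuffer;
  source.connect(audioCtx.destination);
  source.start();
});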

Related

How do I create a video that has seek-able timestamps from an unknown number of incoming video blobs/chunks, using ts-ebml (on-the-fly)?

I am creating a live stream component that utilizes the videojs-record component. Every x amount of milliseconds, the component triggers an event that returns a blob. As seen, the blob contains data from the video recording. It's not the full recording but a piece, since it was returned x seconds into the recording.
After saving it in the backend and playing it back, I find that I am unable to skip through the video; it's not seek-able.
Because this is a task that I'm trying to keep in the frontend, I have to inject this metadata within the browser using ts-ebml. After injecting the metadata, the modified blob is sent to the backend.
The function that receives this blob looks as follows:
timestampHandler(player) {
const { length: recordedDataLength } = player.recordedData;
if (recordedDataLength != 0) {
const { convertStream } = this.converter;
convertStream(player.recordedData[recordedDataLength - 1]).then((blob) => {
console.log(blob);
blob.arrayBuffer().then(async response => {
const bytes = new Uint8Array(response);
let binary = '';
let len = bytes.byteLength;
for (let i = 0; i < len; i++) {
binary += String.fromCharCode(bytes[i]);
}
this.$backend.videoDataSendToServer({ bytes: window.btoa(binary), id: this.videoId })
})
.catch(error => {
console.log('Error Converting:\t', error);
})
})
}
}
convertStream is a function located in a class called TsEBMLEngine. This class looks as follows:
import videojs from "video.js/dist/video";
import { Buffer } from "buffer";
window.Buffer = Buffer;
import { Decoder, tools, Reader } from "ts-ebml";
class TsEBMLEngine {
//constructor(){
//this.chunkDecoder = new Decoder();
//this.chunkReader = new Reader();
//}
convertStream = (data) => {
const chunkDecoder = new Decoder();
const chunkReader = new Reader();
chunkReader.logging = false;
chunkReader.drop_default_duration = false;
// save timestamp
const timestamp = new Date();
timestamp.setTime(data.lastModified);
// load and convert blob
return data.arrayBuffer().then((buffer) => {
// decode
const elms = chunkDecoder.decode(buffer);
elms.forEach((elm) => {
chunkReader.read(elm);
});
chunkReader.stop();
// generate metadata
let refinedMetadataBuf = tools.makeMetadataSeekable(
chunkReader.metadatas,
chunkReader.duration,
chunkReader.cues
);
let body = buffer.slice(chunkReader.metadataSize);
// create new blob
let convertedData = new Blob([refinedMetadataBuf, body], { type: data.type });
// store convertedData
return convertedData;
});
}
}
// expose plugin
videojs.TsEBMLEngine = TsEBMLEngine;
export default TsEBMLEngine;
After recording for more than 10 seconds I stop the recording, go to the DB, and watch the retrieved video. The video is seek-able for the first 3 seconds before the dot reaches the very end of the seek-able line. When I'm watching the video in a live stream, the video freezes after the first 3 seconds.
When I look at the size of the file in the DB, it increases after x seconds which means it's being appended to it, just not properly.
Any help would be greatly appreciated.
To be seekable, a video (at least speaking of EBML-based formats) needs to have a SeekHead tag, Cues tags, and a defined duration in the Info tag.
To create new metadata for the video you can use ts-ebml's exported function makeMetadataSeekable.
Then slice off the beginning of the video and replace it with the new metadata, as was done in this example:
const decoder = new Decoder();
const reader = new Reader();
const webMBuf = await fetch("path/to/file").then(res=> res.arrayBuffer());
const elms = decoder.decode(webMBuf);
elms.forEach((elm)=>{ reader.read(elm); });
reader.stop();
const refinedMetadataBuf = tools.makeMetadataSeekable(reader.metadatas, reader.duration, reader.cues);
const body = webMBuf.slice(reader.metadataSize);
const refinedWebM = new Blob([refinedMetadataBuf, body], {type: "video/webm"});
And voila! The new video file becomes seekable.

Record internal audio of a website via javascript

I made this web app to compose music and wanted to add a feature to download the composition as .mp3/.wav/whatever file format is possible. I've searched for how to do this many times and always gave up, as I couldn't find any examples; the only things I found were microphone recorders, but I want to record the final audio destination of the website.
I play audio in this way:
const a_ctx = new(window.AudioContext || window.webkitAudioContext)()
function playAudio(buf){
const source = a_ctx.createBufferSource()
source.buffer = buf
source.playbackRate.value = pitchKey;
//Other code to modify the audio like adding reverb and changing volume
source.start(0)
}
where buf is the AudioBuffer.
To sum up, I want to record the whole window's audio but can't come up with a way.
link to the whole website code on github
Maybe you could use the MediaStream Recording API (https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API):
The MediaStream Recording API, sometimes simply referred to as the Media Recording API or the MediaRecorder API, is closely affiliated with the Media Capture and Streams API and the WebRTC API. The MediaStream Recording API makes it possible to capture the data generated by a MediaStream or HTMLMediaElement object for analysis, processing, or saving to disk. It's also surprisingly easy to work with.
Also, you may take a look at this topic: new MediaRecorder(stream[, options]) stream can living modify?. It seems to discuss a related issue and might provide you with some insights.
The following code generates some random noise, applies some transform, plays it and creates an audio control, which allows the noise to be downloaded from the context menu via "Save audio as..." (I needed to change the extension of the saved file to .wav in order to play it.)
<html>
<head>
<script>
const context = new(window.AudioContext || window.webkitAudioContext)()
async function run()
{
var myArrayBuffer = context.createBuffer(2, context.sampleRate, context.sampleRate);
// Fill the buffer with white noise;
// just random values between -1.0 and 1.0
for (var channel = 0; channel < myArrayBuffer.numberOfChannels; channel++) {
// This gives us the actual array that contains the data
var nowBuffering = myArrayBuffer.getChannelData(channel);
for (var i = 0; i < myArrayBuffer.length; i++) {
// audio needs to be in [-1.0; 1.0]
nowBuffering[i] = Math.random() * 2 - 1;
}
}
playAudio(myArrayBuffer)
}
function playAudio(buf){
const streamNode = context.createMediaStreamDestination();
const stream = streamNode.stream;
const recorder = new MediaRecorder( stream );
const chunks = [];
recorder.ondataavailable = evt => chunks.push( evt.data );
recorder.onstop = evt => exportAudio( new Blob( chunks ) );
const source = context.createBufferSource()
source.onended = () => recorder.stop();
source.buffer = buf
source.playbackRate.value = 0.2
source.connect( streamNode );
source.connect(context.destination);
source.start(0)
recorder.start();
}
function exportAudio( blob ) {
const aud = new Audio( URL.createObjectURL( blob ) );
aud.controls = true;
document.body.prepend( aud );
}
</script>
</head>
<body onload="javascript:run()">
<input type="button" onclick="context.resume()" value="play"/>
</body>
</html>
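If a dedicated download button is preferred over the context menu, a small sketch along these lines could save the same blob programmatically (downloadAudio and the file name are placeholders, not part of the snippet above; the container is whatever MediaRecorder produced):
// Hedged sketch: trigger a download of the recorded blob directly
function downloadAudio(blob) {
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = 'composition.webm'; // placeholder name; rename to .wav as noted above if needed
  document.body.appendChild(a);
  a.click();
  a.remove();
}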
Is this what you were looking for?

How do I record AND download / upload the webcam stream on server (javascript) WHILE using the webcam input for facial recognition (opencv4nodejs)?

I had to write a program for facial recognition in JavaScript, for which I used the opencv4nodejs API, since there are NOT many working examples. Now I somehow want to record and save the stream (for saving on the client side or uploading to the server) along with the audio. This is where I am stuck. Any help is appreciated.
In simple words, I need to use the webcam input for multiple purposes: one, for facial recognition, and two, to somehow save it; the latter is what I'm unable to do. In the worst case, if it's not possible, instead of recording and saving the webcam video I could also save a complete screen recording. Please answer if there's a workaround for this.
Below is what I tried to do, but it doesn't work for obvious reasons.
$(document).ready(function () {
run1()
})
let chunks = []
// run1() for uploading model and for facecam
async function run1() {
const MODELS = "/models";
await faceapi.loadSsdMobilenetv1Model(MODELS)
await faceapi.loadFaceLandmarkModel(MODELS)
await faceapi.loadFaceRecognitionModel(MODELS)
var _stream
//Accessing the user webcam
const videoEl = document.getElementById('inputVideo')
navigator.mediaDevices.getUserMedia({
video: true,
audio: true
}).then(
(stream) => {
_stream = stream
recorder = new MediaRecorder(_stream);
recorder.ondataavailable = (e) => {
chunks.push(e.data);
console.log(chunks, i);
if (i == 20) makeLink(); //Trying to make Link from the blob for some i==20
};
videoEl.srcObject = stream
},
(err) => {
console.error(err)
}
)
}
// run2() main recognition code and training
async function run2() {
// wait for the results of mtcnn ,
const input = document.getElementById('inputVideo')
const mtcnnResults = await faceapi.ssdMobilenetv1(input)
// Detect All the faces in the webcam
const fullFaceDescriptions = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
// Training the algorithm with given data of the Current Student
const labeledFaceDescriptors = await Promise.all(
CurrentStudent.map(
async function (label) {
// Training the Algorithm with the current students
for (let i = 1; i <= 10; i++) {
// console.log(label);
const imgUrl = `http://localhost:5500/StudentData/${label}/${i}.jpg`
const img = await faceapi.fetchImage(imgUrl)
// detect the face with the highest score in the image and compute it's landmarks and face descriptor
const fullFaceDescription = await faceapi.detectSingleFace(img).withFaceLandmarks().withFaceDescriptor()
if (!fullFaceDescription) {
throw new Error(`no faces detected for ${label}`)
}
const faceDescriptors = [fullFaceDescription.descriptor]
return new faceapi.LabeledFaceDescriptors(label, faceDescriptors)
}
}
)
)
const maxDescriptorDistance = 0.65
const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, maxDescriptorDistance)
const results = fullFaceDescriptions.map(fd => faceMatcher.findBestMatch(fd.descriptor))
i++;
}
// I somehow want this to work
function makeLink() {
alert("ML")
console.log("IN MAKE LINK");
let blob = new Blob(chunks, {
type: media.type
}),
url = URL.createObjectURL(blob),
li = document.createElement('li'),
mt = document.createElement(media.tag),
hf = document.createElement('a');
mt.controls = true;
mt.src = url;
hf.href = url;
hf.download = `${counter++}${media.ext}`;
hf.innerHTML = `download ${hf.download}`;
li.appendChild(mt);
li.appendChild(hf);
ul.appendChild(li);
}
// onPlay(video) function
async function onPlay(videoEl) {
run2()
setTimeout(() => onPlay(videoEl), 50)
}
I'm not familiar with JavaScript, but in general only one program may communicate with the camera at a time. You will probably need to write a server that reads the data from the camera and then sends it to your facial recognition, recording, and so on.
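A rough sketch of that server idea, assuming opencv4nodejs (which the question already uses) owns the camera and the ws package is used as one possible transport; the port and frame interval are placeholders:
// Hedged sketch: one Node.js process reads the camera and broadcasts frames,
// so recognition and recording can both consume the same feed.
const cv = require('opencv4nodejs');
const WebSocket = require('ws');

const cap = new cv.VideoCapture(0);               // single owner of the camera
const wss = new WebSocket.Server({ port: 8080 }); // placeholder port

setInterval(() => {
  const frame = cap.read();
  if (frame.empty) return;
  const jpg = cv.imencode('.jpg', frame);         // JPEG-encoded Buffer
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(jpg);
  });
}, 50);                                           // roughly 20 fps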

How can I play an arrayBuffer as an audio file?

I am receiving an arrayBuffer via a socket.io event and want to be able to process and play the stream as an audio file.
I am receiving the buffer like so:
retrieveAudioStream = () => {
this.socket.on('stream', (arrayBuffer) => {
console.log('arrayBuffer', arrayBuffer)
})
}
Is it possible to set the src attribute of an <audio/> element to a buffer? If not, how can I play the incoming buffer stream?
edit:
To show how I am getting my audio input and streaming it:
window.navigator.getUserMedia(constraints, this.initializeRecorder, this.handleError);
initializeRecorder = (stream) => {
const audioContext = window.AudioContext;
const context = new audioContext();
const audioInput = context.createMediaStreamSource(stream);
const bufferSize = 2048;
// create a javascript node
const recorder = context.createScriptProcessor(bufferSize, 1, 1);
// specify the processing function
recorder.onaudioprocess = this.recorderProcess;
// connect stream to our recorder
audioInput.connect(recorder);
// connect our recorder to the previous destination
recorder.connect(context.destination);
}
This is where I receive the inputBuffer event and stream via a socket.io event
recorderProcess = (e) => {
const left = e.inputBuffer.getChannelData(0);
this.socket.emit('stream', this.convertFloat32ToInt16(left))
}
EDIT 2:
Adding Raymond's suggestion:
retrieveAudioStream = () => {
const audioContext = new window.AudioContext();
this.socket.on('stream', (buffer) => {
const b = audioContext.createBuffer(1, buffer.length, audioContext.sampleRate);
b.copyToChannel(buffer, 0, 0)
const s = audioContext.createBufferSource();
s.buffer = b
})
}
Getting error: NotSupportedError: Failed to execute 'createBuffer' on 'BaseAudioContext': The number of frames provided (0) is less than or equal to the minimum bound (0).
Based on a quick read of what initializeRecorder and recorderProcess do, it looks like you're converting the float32 samples to int16 in some way, and that gets sent to retrieveAudioStream in some way.
If this is correct, then the arrayBuffer is an array of int16 values. Convert them to float32 (most likely by dividing each value by 32768) and save them in a Float32Array. Then create an AudioBuffer of the same length and copyToChannel(float32Array, 0, 0) to write the values to the AudioBuffer. Use an AudioBufferSourceNode with this buffer to play out the audio.
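A sketch of that suggestion, assuming the socket delivers an ArrayBuffer of mono int16 samples recorded at the same sample rate as the playback context (chunk boundaries may still click without further scheduling):
retrieveAudioStream = () => {
  const audioContext = new (window.AudioContext || window.webkitAudioContext)();
  this.socket.on('stream', (arrayBuffer) => {
    // int16 -> float32 in [-1, 1]
    const int16 = new Int16Array(arrayBuffer);
    const float32 = new Float32Array(int16.length);
    for (let i = 0; i < int16.length; i++) {
      float32[i] = int16[i] / 32768;
    }
    // wrap the samples in an AudioBuffer and play it
    const audioBuffer = audioContext.createBuffer(1, float32.length, audioContext.sampleRate);
    audioBuffer.copyToChannel(float32, 0, 0);
    const source = audioContext.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioContext.destination);
    source.start();
  });
}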

HTML 5: AudioContext AudioBuffer

I need to understand how an audio buffer works, and to do that I want to build the following chain: Microphone -> Auto -> Processor -> Manual -> Buffer -> Auto -> Speakers. Auto means automatic data transfer, and Manual means I do it myself via the code in processor.onaudioprocess. So I have the following code:
navigator.getUserMedia = navigator.getUserMedia ||navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
var audioContext;
var myAudioBuffer;
var microphone;
var speakers;
if (navigator.getUserMedia) {
navigator.getUserMedia(
{audio: true},
function(stream) {
audioContext = new AudioContext();
//STEP 1 - we create buffer and its node
speakers = audioContext.destination;
myAudioBuffer = audioContext.createBuffer(1, 22050, 44100);
var bufferNode = audioContext.createBufferSource();
bufferNode.buffer = myAudioBuffer;
bufferNode.connect(speakers);
bufferNode.start();
//STEP 2- we create microphone and processor
microphone = audioContext.createMediaStreamSource(stream);
var processor = (microphone.context.createScriptProcessor ||
microphone.context.createJavaScriptNode).call(microphone.context,4096, 1, 1);
processor.onaudioprocess = function(audioProcessingEvent) {
var inputBuffer = audioProcessingEvent.inputBuffer;
var inputData = inputBuffer.getChannelData(0); // we have only one channel
var nowBuffering = myAudioBuffer.getChannelData(0);
for (var sample = 0; sample < inputBuffer.length; sample++) {
nowBuffering[sample] = inputData[sample];
}
}
microphone.connect(processor);
},
function() {
console.log("Error 003.")
});
}
However, this code doesn't work. No errors, only silence. Where is my mistake?
EDIT
So, since the OP definitely wants to use a buffer, I wrote some more code which you can try out on JSFiddle. The tricky part was that you somehow have to pass the input from the microphone through to some "destination" to get it to process.
navigator.getUserMedia = navigator.getUserMedia ||
navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
// TODO: Figure out what else we need and give the user feedback if their
// browser doesn't support microphone input.
if (navigator.getUserMedia) {
captureMicrophone();
}
// First Step - Capture microphone and process the input
function captureMicrophone() {
// process input from microphone
const processAudio = ev =>
processBuffer(ev.inputBuffer.getChannelData(CHANNEL));
// setup media stream from microphone
const microphoneStream = stream => {
const microphone = audioContext.createMediaStreamSource(stream);
microphone.connect(processor);
// #1 If we don't pass through to the speakers, 'audioprocess' won't be triggered
processor.connect(mute);
};
// TODO: Handle error properly (see todo above - but probably more specific)
const userMediaError = err => console.error(err);
// Second step - Process buffer and output to speakers
const processBuffer = buffer => {
audioBuffer.getChannelData(CHANNEL).set(buffer);
// We could move this out but that would affect audio quality
const source = audioContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(speakers);
source.start();
}
const audioContext = new AudioContext();
const speakers = audioContext.destination;
// We currently only operate on this channel; we might need to add a couple
// of lines of code if this fact changes
const CHANNEL = 0;
const CHANNELS = 1;
const BUFFER_SIZE = 4096;
const audioBuffer = audioContext.createBuffer(CHANNELS, BUFFER_SIZE, audioContext.sampleRate);
const processor = audioContext.createScriptProcessor(BUFFER_SIZE, CHANNELS, CHANNELS);
// #2 Not needed; we could pass through directly to the speakers since there's
// no data anyway, but just to be sure that we don't output anything
const mute = audioContext.createGain();
mute.gain.value = 0;
mute.connect(speakers);
processor.addEventListener('audioprocess', processAudio);
navigator.getUserMedia({audio: true}, microphoneStream, userMediaError);
}
The code I wrote up there looks quite dirty to me, but since you have a large project you can definitely structure it much more cleanly.
I have no clue what you're trying to achieve, but I definitely also recommend having a look at Recorder.js.
Previous answer
The main point you're missing is that you'll get an output buffer passed into your createScriptProcessor callback, so all the createBuffer stuff you do is unnecessary. Apart from that you're on the right track.
This would be a working solution. Try it out on JSFiddle!
navigator.getUserMedia = navigator.getUserMedia ||
navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
if (navigator.getUserMedia) {
captureMicrophone();
}
function captureMicrophone() {
const audioContext = new AudioContext();
const speaker = audioContext.destination;
const processor = audioContext.createScriptProcessor(4096, 1, 1);
const processAudio =
ev => {
const CHANNEL = 0;
const inputBuffer = ev.inputBuffer;
const outputBuffer = ev.outputBuffer;
const inputData = inputBuffer.getChannelData(CHANNEL);
const outputData = outputBuffer.getChannelData(CHANNEL);
// TODO: manually do something with the audio
for (let i = 0; i < inputBuffer.length; ++i) {
outputData[i] = inputData[i];
}
};
const microphoneStream =
stream => {
const microphone = audioContext.createMediaStreamSource(stream);
microphone.connect(processor);
processor.connect(speaker);
};
// TODO: handle error properly
const userMediaError = err => console.error(err);
processor.addEventListener('audioprocess', processAudio);
navigator.getUserMedia({audio: true}, microphoneStream, userMediaError);
}
Are you getting silence (i.e. your onprocess is getting called, but the buffers are empty) or nothing (i.e. your onprocess is never getting called)?
If the latter, try connecting the scriptprocessor to the context.destination. Even if you don't use the output, some implementations currently need that connection to pull data through.
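In code, that suggestion boils down to something like the following sketch (the zero-gain node, called silence here as a new name, is just a precaution so the processor's output isn't heard directly; it assumes the audioContext and processor variables from the question):
// Sketch: make sure the ScriptProcessorNode is pulled by the destination
const silence = audioContext.createGain();
silence.gain.value = 0;
processor.connect(silence);
silence.connect(audioContext.destination);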
