I made this webapp to compose music, and I wanted to add a feature to download the composition as .mp3/.wav/whatever file format is possible. I've searched for how to do this many times and always gave up because I couldn't find any examples; the only things I found were microphone recorders, but I want to record the final audio destination of the website.
I play audio in this way:
const a_ctx = new (window.AudioContext || window.webkitAudioContext)()
function playAudio(buf){
    const source = a_ctx.createBufferSource()
    source.buffer = buf
    source.playbackRate.value = pitchKey;
    // Other code to modify the audio, like adding reverb and changing volume
    source.start(0)
}
where buf is the AudioBuffer.
To sum up, I want to record the window's whole audio output but can't come up with a way to do it.
link to the whole website code on github
Maybe you could use the MediaStream Recording API (https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API):
The MediaStream Recording API, sometimes simply referred to as the Media Recording API or the MediaRecorder API, is closely affiliated with the Media Capture and Streams API and the WebRTC API. The MediaStream Recording API makes it possible to capture the data generated by a MediaStream or HTMLMediaElement object for analysis, processing, or saving to disk. It's also surprisingly easy to work with.
Also, you may take a look at this topic: "new MediaRecorder(stream[, options]) stream can living modify?". It seems to discuss a related issue and might provide you with some insights.
The following code generates some random noise, applies some transform, plays it and creates an audio control, which allows the noise to be downloaded from the context menu via "Save audio as..." (I needed to change the extension of the saved file to .wav in order to play it.)
<html>
<head>
    <script>
        const context = new (window.AudioContext || window.webkitAudioContext)()

        async function run() {
            var myArrayBuffer = context.createBuffer(2, context.sampleRate, context.sampleRate);
            // Fill the buffer with white noise;
            // just random values between -1.0 and 1.0
            for (var channel = 0; channel < myArrayBuffer.numberOfChannels; channel++) {
                // This gives us the actual array that contains the data
                var nowBuffering = myArrayBuffer.getChannelData(channel);
                for (var i = 0; i < myArrayBuffer.length; i++) {
                    // audio needs to be in [-1.0; 1.0]
                    nowBuffering[i] = Math.random() * 2 - 1;
                }
            }
            playAudio(myArrayBuffer)
        }

        function playAudio(buf) {
            // Record everything routed through a MediaStreamDestination node
            const streamNode = context.createMediaStreamDestination();
            const stream = streamNode.stream;
            const recorder = new MediaRecorder(stream);
            const chunks = [];
            recorder.ondataavailable = evt => chunks.push(evt.data);
            recorder.onstop = evt => exportAudio(new Blob(chunks));

            const source = context.createBufferSource()
            source.onended = () => recorder.stop();
            source.buffer = buf
            source.playbackRate.value = 0.2
            source.connect(streamNode);           // feed the recorder
            source.connect(context.destination);  // and the speakers
            source.start(0)
            recorder.start();
        }

        function exportAudio(blob) {
            // Expose the recording as a playable audio element
            const aud = new Audio(URL.createObjectURL(blob));
            aud.controls = true;
            document.body.prepend(aud);
        }
    </script>
</head>
<body onload="run()">
    <input type="button" onclick="context.resume()" value="play"/>
</body>
</html>
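If you'd rather offer a one-click download than rely on the context menu, the recorded blob can also be wrapped in a download link. This is only a sketch: the .webm extension is an assumption, since the container MediaRecorder actually produces depends on the browser (check recorder.mimeType), and you could call it from recorder.onstop instead of (or in addition to) exportAudio.
function downloadAudio(blob) {
    // Build an <a download> element pointing at the recorded blob
    const url = URL.createObjectURL(blob);
    const link = document.createElement('a');
    link.href = url;
    link.download = 'recording.webm'; // extension is an assumption, see note above
    link.textContent = 'Download recording';
    document.body.prepend(link);
}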
Is this what you were looking for?
I'm building a simple looper to help me come to an understanding of the Web Audio API; however, I'm struggling to get a buffer source to play back the recorded audio.
The code has been simplified as much as possible, but with annotation it's still 70+ lines, omitting the CSS and HTML, so apologies for that. A version including the CSS and HTML can be found on JSFiddle:
https://jsfiddle.net/b5w9j4yk/10/
Any help would be much appreciated. Thank you :)
// Aim of the code is to record the input from the mic to a Float32Array, then pass that to a buffer which is linked to a buffer source, so the audio can be played back.
// Grab DOM Elements
const playButton = document.getElementById('play');
const recordButton = document.getElementById('record');

// If allowed access to the microphone, run this code
const promise = navigator.mediaDevices.getUserMedia({audio: true, video: false})
    .then((stream) => {
        recordButton.addEventListener('click', () => {
            // When the record button is pressed, clear/instantiate the record buffer
            if (!recordArmed) {
                recordArmed = true;
                recordButton.classList.add('on');
                console.log('recording armed')
                recordBuffer = new Float32Array(audioCtx.sampleRate * 10);
            }
            else {
                recordArmed = false;
                recordButton.classList.remove('on');
                // After the recording has stopped, pass the recordBuffer to the source's buffer
                myArrayBuffer.copyToChannel(recordBuffer, 0);
                // Looks like the buffer has been passed
                console.log(myArrayBuffer.getChannelData(0));
            }
        });

        // This should start playback of the source; intended to be used after the audio has been recorded. I can't get it to work in this given context
        playButton.addEventListener('click', () => {
            playButton.classList.add('on');
            source.start();
        });

        // Transport variables
        let recordArmed = false;
        let playing = false;

        // This buffer will later be assigned a Float32Array. I'd like to keep this intermediate buffer so the audio can be sliced and manipulated with ease later
        let recordBuffer;

        // Declare the context, input source, and a block processor to pass the input source to the recordBuffer
        const audioCtx = new AudioContext();
        const audioIn = audioCtx.createMediaStreamSource(stream);
        const processor = audioCtx.createScriptProcessor(512, 1, 1);

        // Create a source and corresponding buffer for playback, then link them
        const myArrayBuffer = audioCtx.createBuffer(1, audioCtx.sampleRate * 10, audioCtx.sampleRate);
        const source = audioCtx.createBufferSource();
        source.buffer = myArrayBuffer;

        // Audio routing
        audioIn.connect(processor);
        source.connect(audioCtx.destination);

        // When recording is armed, pass the samples of the block one at a time to the record buffer
        processor.onaudioprocess = ((audioProcessingEvent) => {
            let inputBuffer = audioProcessingEvent.inputBuffer;
            let i = 0;
            if (recordArmed) {
                for (let channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
                    let inputData = inputBuffer.getChannelData(channel);
                    let avg = 0;
                    inputData.forEach(sample => {
                        recordBuffer.set([sample], i);
                        i++;
                    });
                }
            }
            else {
                i = 0;
            }
        });
    })
I'm working on a project which requires the ability to stream audio from a webpage to other clients. I'm already using websockets and would like to channel the data there.
My current approach uses MediaRecorder, but there is a problem with sampling that causes interruptions. It records 1 s of audio and then sends it to the server, which relays it to other clients. Is there a way to capture a continuous audio stream and transform it to base64?
Maybe if there is a way to create base64 audio from a MediaStream without delay, it would solve the problem. What do you think?
I would like to keep using websockets; I know there is WebRTC.
Have you ever done something like this? Is this doable?
                                                       --> Device 1
MediaStream -> MediaRecorder -> base64 -> WebSocket -> Server --> Device ..
                                                       --> Device 18
Here's a demo of the current approach; you can try it here: https://jsfiddle.net/8qhvrcbz/
var sendAudio = function(b64) {
    var message = 'var audio = document.createElement(\'audio\');';
    message += 'audio.src = "' + b64 + '";';
    message += 'audio.play().catch(console.error);';
    eval(message);
    console.log(b64);
}

navigator.mediaDevices.getUserMedia({
    audio: true
}).then(function(stream) {
    setInterval(function() {
        var chunks = [];
        var recorder = new MediaRecorder(stream);
        recorder.ondataavailable = function(e) {
            chunks.push(e.data);
        };
        recorder.onstop = function(e) {
            var audioBlob = new Blob(chunks);
            var reader = new FileReader();
            reader.readAsDataURL(audioBlob);
            reader.onloadend = function() {
                var b64 = reader.result;
                b64 = b64.replace('application/octet-stream', 'audio/mpeg');
                sendAudio(b64);
            }
        }
        recorder.start();
        setTimeout(function() {
            recorder.stop();
        }, 1050);
    }, 1000);
});
Websocket is not the best fit here. I solved it by using WebRTC instead of websockets.
The websocket solution worked by recording 1050 ms instead of 1000; that causes a bit of overlap, but it's still better than hearing gaps.
Although you have solved this through WebRTC, which is the industry recommended approach, I'd like to share my answer on this.
The problem here is not websockets in general but rather the MediaRecorder API. Instead of using it, one can capture raw PCM audio and then submit the captured array buffers to a web worker or WASM for encoding into MP3 chunks or similar.
const context = new AudioContext();
let leftChannel = [];
let rightChannel = [];
let recordingLength = 0;
let bufferSize = 512;
let sampleRate = context.sampleRate;

// audioStream is the MediaStream obtained from getUserMedia
const audioSource = context.createMediaStreamSource(audioStream);
const scriptNode = context.createScriptProcessor(bufferSize, 1, 1);
audioSource.connect(scriptNode);
scriptNode.connect(context.destination);
scriptNode.onaudioprocess = function(e) {
    // Do something with the data, e.g. convert it to WAV or MP3
};
Based on my experiments this would give you "real-time" audio. My theory with the MediaRecorder API is that it does some buffering first before emitting anything, which causes the observable delay.
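As a rough sketch of what that callback could do (encoderWorker is a hypothetical Worker you would create yourself; it is not part of the snippet above):
scriptNode.onaudioprocess = function(e) {
    // Copy the block; the underlying channel buffer is reused between callbacks
    const block = new Float32Array(e.inputBuffer.getChannelData(0));
    leftChannel.push(block);
    recordingLength += bufferSize;
    // Hand the raw PCM block to a worker (or WASM encoder) for MP3/WAV encoding
    encoderWorker.postMessage({ samples: block, sampleRate: sampleRate });
};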
Alright, so I'm trying to determine the intensity (in dB) of samples of an audio file recorded by the user's browser.
I have been able to record it and play it through an HTML <audio> element.
But when I try to use this element as a source and connect it to an AnalyserNode, AnalyserNode.getFloatFrequencyData always returns an array full of -Infinity, getByteFrequencyData always returns zeroes, and getByteTimeDomainData is full of 128s.
Here's my code:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;
var analyser = audioCtx.createAnalyser();
var bufferLength = analyser.frequencyBinCount;
var data = new Float32Array(bufferLength);

mediaRecorder.onstop = function(e) {
    var blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs=opus' });
    chunks = [];
    var audioURL = window.URL.createObjectURL(blob);

    // audio is an HTML <audio> element
    audio.src = audioURL;

    audio.addEventListener("canplaythrough", function() {
        source = audioCtx.createMediaElementSource(audio);
        source.connect(analyser);
        analyser.connect(audioCtx.destination);
        analyser.getFloatFrequencyData(data);
        console.log(data);
    });
}
Any idea why the AnalyserNode behaves like the source is empty/mute? I also tried to put the stream as source while recording, with the same result.
I ran into the same issue; thanks to some of your code snippets, I made it work on my end (the code below is TypeScript and will not work in the browser at the time of writing):
audioCtx.decodeAudioData(this.result as ArrayBuffer).then(function (buffer: AudioBuffer) {
    soundSource = audioCtx.createBufferSource();
    soundSource.buffer = buffer;
    //soundSource.connect(audioCtx.destination); // I do not need to play the sound
    soundSource.connect(analyser);
    soundSource.start(0);
    setInterval(() => {
        calc(); // In here, I will get the analysed data with analyser.getFloatFrequencyData
    }, 300); // This can be changed to 0.
    // The interval helps with making sure the buffer has the data
});
Some explanation (I'm still a beginner when it comes to the Web Audio API, so my explanation might be wrong or incomplete):
An analyser needs to be able to analyse a specific part of your sound file. In this case I create an AudioBufferSourceNode that contains the buffer I got from decoding the audio data. I feed the buffer to the source, which eventually gets copied into the analyser. However, without the interval callback, the buffer never seems to be ready and the analysed data contains -Infinity (which I assume is the absence of any sound, as it has nothing to read) at every index of the array. That is why the interval is there: it analyses the data every 300 ms.
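For reference, a minimal calc() along those lines might look like this (sketch only):
function calc() {
    const data = new Float32Array(analyser.frequencyBinCount);
    analyser.getFloatFrequencyData(data);
    // Each value is a dB level for one frequency bin; -Infinity means that bin is silent
    console.log(data);
}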
Hope this helps someone!
You need to fetch the audio file and decode the audio buffer.
The URL of the audio source must also be on the same domain or have the correct CORS headers (as mentioned by Anthony).
Note: Replace <FILE-URI> with the path to your file in the example below.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;
var analyser = audioCtx.createAnalyser();
var button = document.querySelector('button');
var freqs;
var times;

button.addEventListener('click', (e) => {
    fetch("<FILE-URI>", {
        headers: new Headers({
            "Content-Type": "audio/mpeg"
        })
    }).then(function(response) {
        return response.arrayBuffer();
    }).then((ab) => {
        audioCtx.decodeAudioData(ab, (buffer) => {
            source = audioCtx.createBufferSource();
            source.connect(audioCtx.destination);
            source.connect(analyser);
            source.buffer = buffer;
            source.start(0);
            viewBufferData();
        });
    });
});

// Watch the changes in the audio buffer
function viewBufferData() {
    setInterval(function() {
        freqs = new Uint8Array(analyser.frequencyBinCount);
        times = new Uint8Array(analyser.frequencyBinCount);
        analyser.smoothingTimeConstant = 0.8;
        analyser.fftSize = 2048;
        analyser.getByteFrequencyData(freqs);
        analyser.getByteTimeDomainData(times);
        console.log(freqs);
        console.log(times);
    }, 1000);
}
Is the source file from a different domain? That would fail in createMediaElementSource.
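If CORS is the issue, one thing to try (assuming the server also sends an appropriate Access-Control-Allow-Origin header) is to mark the element as CORS-enabled before assigning its source:
// Must be set before the element starts loading the cross-origin file
audio.crossOrigin = "anonymous";
audio.src = audioURL;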
I am recording browser audio input from the microphone and sending it via websocket to a Node.js service that writes the stream to a .wav file.
My problem is that the first recording comes out fine, but any subsequent recordings come out sounding very slow, about half speed, and are therefore unusable.
If I refresh the browser, the first recording works again and subsequent recordings are slowed down, which is why I am sure the problem is not in the Node.js service.
My project is an Angular 5 project.
I have pasted the code I am trying below.
I am using binary.js ->
https://cdn.jsdelivr.net/binaryjs/0.2.1/binary.min.js
this.client = BinaryClient(`ws://localhost:9001`)

createStream() {
    window.Stream = this.client.createStream();
    window.navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
        this.success(stream);
    })
}

stopRecording() {
    this.recording = false;
    this.win.Stream.end();
}

success(e) {
    var audioContext = window.AudioContext || window.webkitAudioContext;
    var context = new audioContext();
    // the sample rate is in context.sampleRate
    var audioInput = context.createMediaStreamSource(e);
    var bufferSize = 2048;
    var recorder = context.createScriptProcessor(bufferSize, 1, 1);

    recorder.onaudioprocess = (e) => {
        if (!this.recording) return;
        console.log('recording');
        var left = e.inputBuffer.getChannelData(0);
        this.win.Stream.write(this.convertoFloat32ToInt16(left));
    }

    audioInput.connect(recorder)
    recorder.connect(context.destination);
}

convertoFloat32ToInt16(buffer) {
    var l = buffer.length;
    var buf = new Int16Array(l)
    while (l--) {
        buf[l] = buffer[l] * 0xFFFF; //convert to 16 bit
    }
    return buf.buffer
}
I am stumped as to what can be going wrong so if anyone has experience using this browser tech I would appreciate any help.
Thanks.
I've had this exact problem: the sample rate you are writing your WAV file with is incorrect.
You need to pass the sample rate used by the browser and the microphone to the Node.js service that writes the binary WAV file.
Client side:
After a successful navigator.mediaDevices.getUserMedia (in your case, in the success function), get the sample rate from the AudioContext:
var _sampleRate = context.sampleRate;
Then pass it to the Node.js listener as a parameter. In my case I used:
binaryClient.createStream({ SampleRate: _sampleRate });
Server (Node.js) side:
Use the passed SampleRate to set the WAV file's sample rate. In my case this is the code:
fileWriter = new wav.FileWriter(wavPath, {
    channels: 1,
    sampleRate: meta.SampleRate,
    bitDepth: 16
});
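For completeness, a rough sketch of how the receiving side could tie together, assuming a binary.js BinaryServer and the wav npm package (binaryServer and wavPath are placeholders):
binaryServer.on('connection', function(client) {
    client.on('stream', function(stream, meta) {
        // meta.SampleRate is the value passed from the browser above
        var fileWriter = new wav.FileWriter(wavPath, {
            channels: 1,
            sampleRate: meta.SampleRate,
            bitDepth: 16
        });
        stream.pipe(fileWriter);
        stream.on('end', function() {
            fileWriter.end();
        });
    });
});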
This will prevent broken sounds, low-pitched sounds, and slow or fast WAV files.
Hope this helps.
I'm currently playing around with the Web Audio API in Chrome (60.0.3112.90) to possibly build a sound wave of a given file via FileReader, AudioContext, createScriptProcessor, and createAnalyser. I have the following code:
const visualize = analyser => {
    analyser.fftSize = 256;
    let bufferLength = analyser.frequencyBinCount;
    let dataArray = new Float32Array(bufferLength);
    analyser.getFloatFrequencyData(dataArray);
}
loadAudio(file){
    // creating FileReader to convert audio file to an ArrayBuffer
    const fileReader = new FileReader();
    navigator.getUserMedia = (navigator.getUserMedia ||
        navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia ||
        navigator.msGetUserMedia);

    fileReader.addEventListener('loadend', () => {
        const fileArrayBuffer = fileReader.result;

        let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
        let processor = audioCtx.createScriptProcessor(4096, 1, 1);
        let analyser = audioCtx.createAnalyser();
        analyser.connect(processor);
        let data = new Float32Array(analyser.frequencyBinCount);

        let soundBuffer;
        let soundSource = audioCtx.createBufferSource();

        // loading audio track into buffer
        audioCtx.decodeAudioData(
            fileArrayBuffer,
            buffer => {
                soundBuffer = buffer;
                soundSource.buffer = soundBuffer;
                soundSource.connect(analyser);
                soundSource.connect(audioCtx.destination);
                processor.onaudioprocess = () => {
                    // data becomes an array of -Infinity values after the call below
                    analyser.getFloatFrequencyData(data);
                };
                visualize(analyser);
            },
            error => 'error with decoding audio data: ' + error.err
        );
    });
    fileReader.readAsArrayBuffer(file);
}
Upon loading a file, I get all the way to analyser.getFloatFrequencyData(data). Reading the Web Audio API docs, they say that the parameter is:
The Float32Array that the frequency domain data will be copied to.
For any sample which is silent, the value is -Infinity.
In my case, I have both an mp3 and a wav file I'm using to test this, and after invoking analyser.getFloatFrequencyData(data), both files end up giving me data which becomes an array of -Infinity values.
This may be due to my ignorance with Web Audio's API, but my question is why are both files, which contain loud audio, giving me an array that represents silent samples?
The Web Audio AnalyserNode is only designed to work in realtime. (It used to be called RealtimeAnalyser.) Web Audio doesn't have the ability to do analysis on buffers; take a look at another library, like DSP.js.
Instead of:
soundSource.connect(analyser);
soundSource.connect(audioCtx.destination);
try:
soundSource.connect(analyser);
analyser.connect(audioCtx.destination);
Realising I should do a source ==> analyser ==> destination chain solved this problem when I encountered it.