Is there a way to force .pipe on a stream to write to a file every certain time/size?
Basically, I am using socket.io-stream. From the browser I send a buffer with audio, emitting it like this:
Browser
c.onaudioprocess = function(o) {
    var input = o.inputBuffer.getChannelData(0);
    // convert the Float32 samples to 16-bit PCM and write them to the socket.io-stream
    stream1.write(new ss.Buffer(convertFloat32ToInt16(input)));
};
Server (nodejs)
var fileWriter = new wav.FileWriter('/tmp/demo.wav', {
    channels: 1,
    sampleRate: 44100,
    bitDepth: 16
});

ss(socket).on('client-stream-request', function(stream) {
    stream.pipe(fileWriter);
});
The problem I have is that the file demo.wav is only written when I finish the stream, i.e. when I stop the microphone. But I would like it to be written continuously, as I will be doing speech recognition with Google. Any ideas? If I pipe directly to the Google speech recognition, the chunks are too small and Google is not able to recognize them.
Looking over the Node stream API, it looks like you should be able to pass an options parameter to the pipe function.
Try
stream.pipe(fileWriter, { end: false });
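If the end goal is live recognition, one option is to pipe the same incoming stream into both the WAV writer and a streaming-recognize request, so Google accumulates the chunks instead of receiving tiny one-off requests. A rough sketch, assuming the @google-cloud/speech client library and LINEAR16 audio at 44100 Hz to match the FileWriter config above (untested):

const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

ss(socket).on('client-stream-request', function(stream) {
    // streaming-recognize request; the config values are assumptions based on the FileWriter above
    const recognizeStream = client.streamingRecognize({
        config: {
            encoding: 'LINEAR16',
            sampleRateHertz: 44100,
            languageCode: 'en-US'
        },
        interimResults: true
    }).on('data', (data) => {
        const result = data.results[0];
        if (result) console.log('transcript:', result.alternatives[0].transcript);
    });

    // one incoming stream, two destinations: the WAV file and the recognizer;
    // { end: false } keeps the file writer open across pauses in the source
    stream.pipe(fileWriter, { end: false });
    stream.pipe(recognizeStream);
});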
Related
I am trying to record and upload audio from JavaScript. I can successfully record audio Blobs from a MediaRecorder. My understanding is that after recording several chunks into blobs, I would concatenate them with new Blob(audioBlobs) and upload that. Unfortunately, the result on the server side keeps being more or less gibberish. I'm currently running a localhost connection, so converting to uncompressed WAV isn't a problem (it might become one later, but that's a separate issue). Here is what I have so far:
navigator.mediaDevices.getUserMedia({audio: true, video: false})
    .then(stream => {
        const mediaRecorder = new MediaRecorder(stream);
        mediaRecorder.start(1000);

        const audioChunks = [];
        mediaRecorder.addEventListener("dataavailable", event => {
            audioChunks.push(event.data);
        });

        function sendData () {
            const audioBlob = new Blob(audioChunks);
            session.call('my.app.method', [XXXXXX see below XXXXXX])
        }
    })
The session object here is an autobahn.js WebSocket connection to a Python server (using soundfile). I tried a number of arguments in the place labelled XXXXX in the code:
Just pass the audioBlob. In that case, the Python side just receives an empty dictionary.
Pass audioBlob.text(). In that case, I get something that looks somewhat binary (it starts with OggS), but it can't be decoded.
Pass audioBlob.arrayBuffer(). In that case, the Python side receives an empty dictionary.
A possible solution could be to convert the data to WAV on the server side (just changing the MIME type on the blob doesn't work) or to find a way to interpret the .text() output on the server side.
The solution was to use recorder.js and then use its getBuffer method to get the wave data as a Float32Array.
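A rough sketch of that approach (assuming Matt Diamond's recorder.js is loaded globally and that session.call accepts a plain array; 'my.app.method' is the placeholder name from the question):

const audioContext = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(stream => {
        const source = audioContext.createMediaStreamSource(stream);
        const recorder = new Recorder(source);   // from recorder.js
        recorder.record();

        // later, e.g. from a button handler
        function sendData() {
            recorder.getBuffer(buffers => {
                // buffers[0] / buffers[1] are Float32Arrays of raw PCM per channel
                session.call('my.app.method', [Array.from(buffers[0]), audioContext.sampleRate]);
            });
        }
    });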
I am trying to do some audio analysis for a visualizer running on my computer.
Is it possible to access the output audio data stream directly from the browser?
Currently running JavaScript with the three.js and meyda libraries.
I've figured out how to use the Web Audio API to analyze input from the microphone, but can't seem to gain access to the audio output on my computer.
I've tried to connect source to the destination using
source.connect(audioContext.destination)
but this doesn't seem to do anything.
This is our current listener config:
// Listener
const bufferSize = 256;
let analyzer;

// The navigator object contains information about the browser.
// This async call initializes audio input from the user.
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(stream => {
    if (!analyzer) initAnalyzer(stream)
})

function initAnalyzer(stream) {
    const audioContext = new AudioContext();
    // set audio source to input stream from microphone (Web Audio API https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamAudioSourceNode)
    const source = audioContext.createMediaStreamSource(stream);
    analyzer = Meyda.createMeydaAnalyzer({
        audioContext: audioContext,
        source: source,
        bufferSize: bufferSize,
        featureExtractors: [ 'amplitudeSpectrum', 'spectralFlatness' ], // ["rms", "energy"],
        callback: features => null
    });
    analyzer.start();
}
It's not possible to grab the audio from the computer without external software like Audio Hijack. Sorry!
audiooutput devices cannot be accessed for privacy reasons; only audioinput devices can (and only after confirming with the user). On Windows you can enable "Stereo Mix", which routes all outputs to a virtual input, and you can use that, but it requires all users to have Stereo Mix enabled...
The visualizers you see are using the buffer or source that they created themselves, so of course they have access to it.
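If it helps, you can confirm what is exposed by listing devices (a small sketch; labels only appear after the user has granted microphone permission):

navigator.mediaDevices.enumerateDevices().then(devices => {
    // 'audiooutput' entries are listed, but there is no API to capture their signal;
    // only 'audioinput' devices can be opened with getUserMedia
    devices
        .filter(d => d.kind === 'audioinput')
        .forEach(d => console.log(d.kind, d.label || '(label hidden until permission granted)'));
});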
I want to create a live audio stream from one device to a Node server, which can then broadcast that live feed to several front ends.
I have searched extensively for this and have really hit a wall, so I'm hoping somebody out there can help.
I am able to get my audio input from the window.navigator.getUserMedia API.
getAudioInput(){
    const constraints = {
        video: false,
        audio: {deviceId: this.state.deviceId ? {exact: this.state.deviceId} : undefined},
    };

    window.navigator.getUserMedia(
        constraints,
        this.initializeRecorder,
        this.handleError
    );
}
This then passes the stream to the initializeRecorder function, which uses the AudioContext API to create a MediaStreamAudioSourceNode via createMediaStreamSource:
initializeRecorder = (stream) => {
    const audioContext = window.AudioContext;
    const context = new audioContext();
    const audioInput = context.createMediaStreamSource(stream);
    const bufferSize = 2048;
    // create a javascript node
    const recorder = context.createScriptProcessor(bufferSize, 1, 1);
    // specify the processing function
    recorder.onaudioprocess = this.recorderProcess;
    // connect stream to our recorder
    audioInput.connect(recorder);
    // connect our recorder to the previous destination
    recorder.connect(context.destination);
}
In my recorderProcess function, I now have an AudioProcessingEvent object which I can stream.
Currently I am emitting the audio event as a stream via a socket connection like so:
recorderProcess = (e) => {
    const left = e.inputBuffer.getChannelData(0);
    this.socket.emit('stream', this.convertFloat32ToInt16(left))
}
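The convertFloat32ToInt16 helper isn't shown in the question; a typical implementation (an assumption about what it does: clamp each sample and scale it to the signed 16-bit range) would look something like this:

convertFloat32ToInt16 = (buffer) => {
    const output = new Int16Array(buffer.length);
    for (let i = 0; i < buffer.length; i++) {
        // clamp each sample to [-1, 1], then scale to the signed 16-bit range
        const s = Math.max(-1, Math.min(1, buffer[i]));
        output[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
    }
    return output.buffer; // the underlying ArrayBuffer is what gets sent over the socket
}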
Is this the best or only way to do this? Is there a better way, for example using fs.createReadStream and then posting to an endpoint via Axios? As far as I can tell, that will only work with a file, as opposed to a continuous live stream.
Server
I have a very simple socket server running on top of Express. Currently I listen for the stream event and then emit that same input back out:
io.on('connection', (client) => {
    client.on('stream', (stream) => {
        client.emit('stream', stream)
    });
});
Not sure how scalable this is but if you have a better suggestion, I'm very open to it.
Client
Now this is where I am really stuck:
On my client I am listening for the stream event and want to listen to the stream as audio output in my browser. I have a function that receives the event but am stuck as to how I can use the arrayBuffer object that is being returned.
retrieveAudioStream = () => {
    this.socket.on('stream', (buffer) => {
        // ... how can I listen to the buffer as audio
    })
}
Is the way I am streaming audio the best / only way I can upload to the node server?
How can I listen to the arrayBuffer object that is being returned on my client side?
Is the way I am streaming audio the best / only way I can upload to the node server?
Not really the best, but I have seen worse. It's not the only way either; using WebSockets is considered OK from that point of view, since you want things to be "live" and not keep sending HTTP POST requests every 5 seconds.
How can I listen to the arrayBuffer object that is being returned on my client side?
You can try BaseAudioContext.decodeAudioData to play back the streamed data; the example in the docs is pretty simple.
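For instance, a minimal sketch of that idea (assuming each emitted chunk is a complete encoded audio file; decodeAudioData cannot decode raw Int16 PCM, so for raw samples you would build an AudioBuffer yourself instead):

retrieveAudioStream = () => {
    const audioContext = new AudioContext();
    this.socket.on('stream', (arrayBuffer) => {
        // decode the received chunk and play it through the default output
        audioContext.decodeAudioData(arrayBuffer, (audioBuffer) => {
            const source = audioContext.createBufferSource();
            source.buffer = audioBuffer;
            source.connect(audioContext.destination);
            source.start();
        });
    });
}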
From the code snippets you provide, I assume you want to build something from scratch to learn how things work.
In that case, you can try the MediaStream Recording API along with a WebSocket server that sends the chunks to X clients so they can reproduce the audio, etc.
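A rough sketch of that broadcast idea with socket.io (event names and the one-second timeslice are just placeholders):

// Client: record encoded chunks and push them to the server
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    const recorder = new MediaRecorder(stream);
    recorder.ondataavailable = e => socket.emit('audio-chunk', e.data);
    recorder.start(1000); // emit a chunk roughly every second
});

// Server: forward each chunk to every other connected client
io.on('connection', socket => {
    socket.on('audio-chunk', chunk => {
        socket.broadcast.emit('audio-chunk', chunk);
    });
});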
It would also make sense to invest time in the WebRTC API, to learn how to stream from one client to another.
Also take a look at the links below for some useful information.
(stackoverflow) Get live streaming audio from NodeJS server to clients
(github) video-conference-webrtc
twitch.tv tech stack article
rtc.io
I am trying to take audio recorded by one client and send it to other connected clients in real time. The objective is a sort of "broadcast". I have read many explanations to guide me, with no luck.
Currently I have the audio being written to file like so:
var fileWriter = new wav.FileWriter(outFile, {
    channels: 1,
    sampleRate: 44100,
    bitDepth: 16
});

client.on('stream', function(stream, meta) {
    stream.pipe(fileWriter);
    stream.on('end', function() {
        fileWriter.end();
        console.log('wrote to file ' + outFile);
    });
});
As you can see, I'm currently using BinaryJS to send the audio data to the server, at which point I pipe the stream to the FileWriter.
I then tried to read the file and pipe it to the response:
app.get('/audio', function(req, res) {
fs.createReadStream(__dirname + '/demo.wav').pipe(res);
})
As I'm sure you've already noticed, this doesn't work. I thought that (maybe) while the file is being constructed, it would also play back any content added to it. That didn't happen: it played up to the point the file had reached when a client requested it, and then ended.
I am unsure of how to pass the stream data in real time to the clients requesting it. Being completely new to Node.js, I am not sure of the methods and terms used for this procedure, and I have been unable to find any direct working examples.
In recent days, I have tried to use JavaScript to record an audio stream.
I found no example code that works.
Is there any browser that supports this?
Here is my code
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
    navigator.mozGetUserMedia || navigator.msGetUserMedia;

navigator.getUserMedia({ audio: true }, gotStream, null);

function gotStream(stream) {
    msgStream = stream;
    msgStreamRecorder = stream.record(); // no method record :(
}
getUserMedia gives you access to the device, but it is up to you to record the audio. To do that, you'll want to 'listen' to the device, building a buffer of the data. Then when you stop listening to the device, you can format that data as a WAV file (or any other format). Once formatted you can upload it to your server, S3, or play it directly in the browser.
To listen to the data in a way that is useful for building your buffer, you will need a ScriptProcessorNode. A ScriptProcessorNode basically sits between the input (microphone) and the output (speakers), and gives you a chance to manipulate the audio data as it streams. Unfortunately the implementation is not straightforward.
You'll need:
getUserMedia to access the device
AudioContext to create a MediaStreamAudioSourceNode and a ScriptProcessorNode
MediaStreamAudioSourceNode to represent the audio stream
ScriptProcessorNode to get access to the streaming audio data via its onaudioprocess event. The event exposes the channel data that you'll build your buffer with.
Putting it all together:
navigator.getUserMedia({audio: true},
    function(stream) {
        // create the MediaStreamAudioSourceNode
        var context = new AudioContext();
        var source = context.createMediaStreamSource(stream);
        var node;
        var recLength = 0,
            recBuffersL = [],
            recBuffersR = [];
        // create a ScriptProcessorNode
        if (!context.createScriptProcessor) {
            node = context.createJavaScriptNode(4096, 2, 2);
        } else {
            node = context.createScriptProcessor(4096, 2, 2);
        }
        // listen to the audio data, and record into the buffer
        // (copy the channel data, since the underlying buffer may be reused between events)
        node.onaudioprocess = function(e) {
            recBuffersL.push(new Float32Array(e.inputBuffer.getChannelData(0)));
            recBuffersR.push(new Float32Array(e.inputBuffer.getChannelData(1)));
            recLength += e.inputBuffer.getChannelData(0).length;
        }
        // connect the ScriptProcessorNode with the input audio
        source.connect(node);
        // if the ScriptProcessorNode is not connected to an output the "onaudioprocess" event is not triggered in chrome
        node.connect(context.destination);
    },
    function(e) {
        // do something about errors
    });
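One step the snippet above stops short of is flattening those per-callback chunks into a single continuous buffer once recording stops; a small sketch of that merge (the actual WAV encoding is what the AudioRecorder code mentioned next handles for you):

// flatten the array of Float32Array chunks into one continuous Float32Array
function mergeBuffers(recBuffers, recLength) {
    var result = new Float32Array(recLength);
    var offset = 0;
    for (var i = 0; i < recBuffers.length; i++) {
        result.set(recBuffers[i], offset);
        offset += recBuffers[i].length;
    }
    return result;
}

var left = mergeBuffers(recBuffersL, recLength);
var right = mergeBuffers(recBuffersR, recLength);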
Rather than building all of this yourself I suggest you use the AudioRecorder code, which is awesome. It also handles writing the buffer to a WAV file. Here is a demo.
Here's another great resource.
For browsers that support the MediaRecorder API, use it (a minimal sketch follows this list).
For older browsers that do not support the MediaRecorder API, there are three ways to do it:
as wav
all code client-side.
uncompressed recording.
source code --> http://github.com/mattdiamond/Recorderjs
as mp3
all code client-side.
compressed recording.
source code --> http://github.com/Mido22/mp3Recorder
as opus packets (can get output as wav, mp3 or ogg)
client and server(node.js) code.
compressed recording.
source code --> http://github.com/Mido22/recordOpus
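As a minimal illustration of the first option, a rough MediaRecorder sketch (the five-second timeout and local playback are just placeholders):

navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    const recorder = new MediaRecorder(stream); // the browser picks a default container/codec
    const chunks = [];
    recorder.ondataavailable = e => chunks.push(e.data);
    recorder.onstop = () => {
        const blob = new Blob(chunks, { type: recorder.mimeType });
        // upload the blob, or play it back locally:
        new Audio(URL.createObjectURL(blob)).play();
    };
    recorder.start();
    setTimeout(() => recorder.stop(), 5000); // record roughly five seconds
});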
You could check this site:
https://webaudiodemos.appspot.com/AudioRecorder/index.html
It stores the audio into a file (.wav) on the client side.
There is a bug that currently does not allow audio only. Please see http://code.google.com/p/chromium/issues/detail?id=112367
Currently, this is not possible without sending the data over to the server side. However, this will become possible in the browser once vendors start supporting the MediaRecorder working draft.