I have a Node.js server which receives ArrayBuffers over UDP; once a packet is received, socket.io does the next job, which is emitting the ArrayBuffer to the client (Angular).
Here's a code sample of the Node.js server implementation:
io.on("connection", () => {
server.on("listening", () => {
let address = server.address();
console.log("server host", address.address);
console.log("server port", address.port);
});
server.on("message", function (message, remote) {
arrayBuffer = message.slice(28);
io.emit("audio", buffer); // // console.log(`Received packet: ${remote.address}:${remote.port}`)
});
server.bind(PORT, HOST);
});
On the client side, I create a Blob object from the received ArrayBuffer, then create an object URL to pass to the audio element's src.
socket.on('audio', (audio) => {
  this.startAudio(audio);
});

private startAudio(audio) {
  const blob = new Blob([audio], { type: "audio/wav" });
  const audioElement = document.createElement('audio');
  // console.log('blob', blob);
  const url = window.URL.createObjectURL(blob);
  // console.log('url', url);
  audioElement.src = url;
  audioElement.play();
}
But it seems I'm doing it wrong, since the audio isn't being played.
So I'm very curious to know the correct process for playing live audio streams in the browser.
I was able to play live audio in the browser by using FFmpeg; here's the link to the answer.
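For anyone who wants to stay in the browser without FFmpeg: one reason the per-packet Blob approach fails is that each UDP packet on its own is not a complete WAV file, so the audio element has nothing it can decode. A rough alternative is to schedule each chunk with the Web Audio API instead. This is only a sketch, and it assumes the payload (after the 28-byte slice) is raw mono 16-bit PCM at 44.1 kHz and that socket is the existing socket.io connection; adjust to your real format:

// Sketch only: play incoming 16-bit PCM chunks with the Web Audio API.
// Assumes mono, 44.1 kHz, little-endian samples arriving as ArrayBuffers.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
let playTime = 0; // when the next chunk should start

socket.on('audio', (data) => {
  // Browsers may keep the context suspended until a user gesture;
  // call audioCtx.resume() from a click handler if needed.
  const int16 = new Int16Array(data);
  const float32 = new Float32Array(int16.length);
  for (let i = 0; i < int16.length; i++) {
    float32[i] = int16[i] / 32768; // convert to the [-1, 1] range
  }

  const buffer = audioCtx.createBuffer(1, float32.length, 44100);
  buffer.copyToChannel(float32, 0);

  const source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);

  // Schedule chunks back to back so playback stays continuous.
  playTime = Math.max(playTime, audioCtx.currentTime);
  source.start(playTime);
  playTime += buffer.duration;
});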
I am using the MediaStream Recording API to record audio in the browser, like this (courtesy of https://github.com/bryanjenningz/record-audio):
const recordAudio = () =>
  new Promise(async resolve => {
    // getUserMedia requires a secure context: it will throw unless
    // served from https:// or localhost.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const mediaRecorder = new MediaRecorder(stream);
    let audioChunks = [];

    mediaRecorder.addEventListener('dataavailable', event => {
      audioChunks.push(event.data);
      console.log("Got audioChunk!!", event.data.size, event.data.type);
      // mediaRecorder.requestData()
    });

    const start = () => {
      audioChunks = [];
      mediaRecorder.start(1000); // milliseconds per recorded chunk
    };

    const stop = () =>
      new Promise(resolve => {
        mediaRecorder.addEventListener('stop', () => {
          const audioBlob = new Blob(audioChunks, { type: 'audio/mpeg' });
          const audioUrl = URL.createObjectURL(audioBlob);
          const audio = new Audio(audioUrl);
          const play = () => audio.play();
          resolve({ audioChunks, audioBlob, audioUrl, play });
        });
        mediaRecorder.stop();
      });

    resolve({ start, stop });
  });
I would like to modify this code to start streaming to Node.js while it is still recording. I understand the header won't be complete until recording has finished. I can either account for that on the Node.js side, or perhaps I can live with invalid headers, because I'll be feeding this into ffmpeg on Node.js anyway. How do I do this?
The trick is to start your recorder with mediaRecorder.start(timeSlice), where timeSlice is the number of milliseconds the browser waits before emitting a dataavailable event with a blob of data.
Then, in your dataavailable event handler, you call the server:
mediaRecorder.addEventListener('dataavailable', event => {
  myHTTPLibrary.post(event.data);
});
That's the general solution. It's not possible to insert an example here, because a code sandbox can't ask you to use your webcam, but I've created one here. It simply sends your data to Request Bin, where you can watch the data stream in.
There are some other things you'll need to think about if you want to stitch the video or audio back together. The blog post touches on that.
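To tie that back to the recorder in the question, here is a rough sketch that posts each chunk as soon as the browser emits it, instead of collecting chunks until stop() is called. The /upload-chunk endpoint is made up, so substitute whatever your server exposes:

// Sketch only: stream MediaRecorder output while it is still recording.
const recordAndStream = async () => {
  // Requires a secure context (https:// or localhost).
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mediaRecorder = new MediaRecorder(stream);

  mediaRecorder.addEventListener('dataavailable', event => {
    // Each blob is only a fragment of the final file; generally only the
    // first chunk carries the container header, so keep them in order.
    fetch('/upload-chunk', { method: 'POST', body: event.data });
  });

  mediaRecorder.start(1000); // emit a chunk roughly every second
  return () => mediaRecorder.stop();
};

On the Node.js side you can append the chunks, in order, to a file or pipe them straight into ffmpeg's stdin; later chunks are generally not independently playable on their own.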
(I didn't really understand the other solutions.)
I'm streaming a webcam video from a browser client (index.html) to a WebSocket server (app.js/node.js).
The server receives these frames encoded as base64 and writes them to an out.png image inside a static/img folder.
I would like to delete this out.png image after the client stops streaming.
const wsServer = new WebSocket.Server({ server: httpServer });

// array of connected websocket clients
let connectedClients = [];

wsServer.on('connection', (ws, req) => {
  console.log('Connected');
  // add new connected client
  connectedClients.push(ws);

  // listen for messages from the streamer; the clients will not send
  // anything, so we don't need to filter
  ws.on('message', data => {
    // write out the base64 encoded frame for each connected ws
    connectedClients.forEach((ws, i) => {
      if (ws.readyState === ws.OPEN) { // check if it is still connected
        // create a static image from the base64 encoded frame
        fs.writeFile(
          path.resolve(__dirname, './static/img/out.png'),
          data.replace(/^data:image\/png;base64,/, ""),
          'base64',
          function (err) {
            if (err) throw err;
          }
        );
      } else { // if it's not connected, remove it from the array of connected ws
        connectedClients.splice(i, 1);
      }
    });
  });
});
I tried the approach below, which looks like it should make sense, but .on('close') is not actively checking whether the streamer is still connected. How can I do that? The server should regularly check whether the streamer stopped streaming and clean up the mess he/she left behind :)
// If the client is not streaming or stopped streaming
wsServer.on('close', (ws, req) => {
  console.log('Disconnected');
  // delete last image if streamer does not send new messages
  fs.unlink(path.resolve(__dirname, './static/img/out.png'), function (err) {
    if (err) throw err;
  });
});
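One way to do it (a sketch, not tested against your setup): register the close handler on each individual ws inside your existing connection callback rather than on wsServer, and only delete the file once no clients remain. This assumes every connected client is a potential streamer; if you also have pure viewers, you would need to tag the streamer's socket.

wsServer.on('connection', (ws, req) => {
  connectedClients.push(ws);

  // 'close' fires on the individual socket when that client disconnects;
  // wsServer itself only emits 'close' when the whole server shuts down.
  ws.on('close', () => {
    console.log('Disconnected');
    connectedClients = connectedClients.filter(client => client !== ws);

    // Nobody left streaming: remove the last frame.
    if (connectedClients.length === 0) {
      fs.unlink(path.resolve(__dirname, './static/img/out.png'), err => {
        if (err && err.code !== 'ENOENT') console.error(err);
      });
    }
  });
});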
I am currently trying to stream a .webm video file via socket.io to my client (currently using Chrome as client).
Appending the first Uint8Array to the SourceBuffer works fine but appending further ones does not work and throws the following error:
Uncaught DOMException: Failed to execute 'appendBuffer' on 'SourceBuffer': The HTMLMediaElement.error attribute is not null.
My current code:
'use strict';

let socket = io.connect('http://localhost:1337');
let mediaSource = new MediaSource();
let video = document.getElementById("player");
let queue = [];
let sourceBuffer;

video.src = window.URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
  sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');

  socket.on("video", function (data) {
    let uIntArray = new Uint8Array(data);
    if (!sourceBuffer.updating) {
      sourceBuffer.appendBuffer(uIntArray);
    } else {
      queue.push(data);
    }
  });
});
Server-side code (snippet):
io.on('connection', function (socket) {
  console.log("Client connected");
  let readStream = fs.createReadStream("bunny.webm");
  readStream.addListener('data', function (data) {
    socket.emit('video', data);
  });
});
I also removed the webkit checks since this will only run on Chromium browsers.
I think you have to free up the buffer; see the remove() function:
http://w3c.github.io/media-source/#widl-SourceBuffer-remove-void-double-start-unrestricted-double-end
Let me know if it helped.
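A hedged sketch of what that could look like, combined with actually draining the queue (in the code above the queue is filled but never appended, so those chunks are silently dropped, which can itself put the media element into an error state). The codec string is copied from the question; the 30-second retention window is an arbitrary choice:

mediaSource.addEventListener('sourceopen', function () {
  sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');

  // After every append, flush one queued chunk and trim old data so the
  // buffer does not grow without bound. Both appendBuffer() and remove()
  // are asynchronous, so only call them while updating is false.
  sourceBuffer.addEventListener('updateend', function () {
    if (queue.length > 0 && !sourceBuffer.updating) {
      sourceBuffer.appendBuffer(queue.shift());
    }
    const buffered = sourceBuffer.buffered;
    if (!sourceBuffer.updating && buffered.length > 0 &&
        video.currentTime - buffered.start(0) > 30) {
      sourceBuffer.remove(buffered.start(0), video.currentTime - 30);
    }
  });

  socket.on('video', function (data) {
    const chunk = new Uint8Array(data);
    if (!sourceBuffer.updating && queue.length === 0) {
      sourceBuffer.appendBuffer(chunk);
    } else {
      queue.push(chunk);
    }
  });
});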
I've been trying to create a Node.js audio streaming script using Socket.io and the Node.js microphone library.
The problem is that Socket.io does not pipe the audio to the remote server while SoX/ALSA is still recording the audio.
audio-stream-tx (client):
var io = require('socket.io-client');
var ss = require('socket.io-stream');
var mic = require('microphone');
var fs = require('fs');

var socket = io.connect('ws://localhost:25566');
var stream = ss.createStream();

ss(socket).emit('stream-data', stream);

mic.startCapture();
mic.audioStream.pipe(stream);

process.on('SIGINT', function () {
  mic.stopCapture();
  console.log('Got SIGINT. Press Control-D to exit.');
});
audio-stream-rx (server):
var io = require('socket.io')();
var ss = require('socket.io-stream');
var fs = require('fs');
var port = 25566;

io.on('connection', function (socket) {
  ss(socket).on('stream-data', function (stream, data) {
    console.log('Incoming> Receiving data stream.');
    stream.pipe(fs.createWriteStream(Date.now() + ".wav", { flags: 'a' }));
  });
});

io.listen(port);
The microphone library spawns either a SoX or an ALSA command (depending on the platform) to record the microphone's audio. The scripts above do work; it's just that the audio data is only piped to the stream once mic.stopCapture() is called.
Is there a workaround to force socket.io-stream to stream the audio data from the moment mic.startCapture() is called?
I am making an application where I want users to be able to use their mic (on their phone) and talk to each other in the game lobby. However, this has proven to be more than difficult.
I am using Node.js with socket.io and socket.io-stream.
On my client I am using the Web Audio API to take my microphone's input (I am not really worried about this all that much, because I am going to make this a native iOS app):
navigator.getUserMedia = (navigator.getUserMedia ||
                          navigator.webkitGetUserMedia ||
                          navigator.mozGetUserMedia ||
                          navigator.msGetUserMedia);

if (navigator.getUserMedia) {
  navigator.getUserMedia(
    // constraints
    {
      video: false,
      audio: true
    },
    function (localMediaStream) {
      var video = document.querySelector('audio');
      video.src = window.URL.createObjectURL(localMediaStream);
      lcm = localMediaStream;

      var audioContext = window.AudioContext;
      var context = new audioContext();
      var audioInput = context.createMediaStreamSource(localMediaStream);
      var bufferSize = 2048;
      // create a javascript node
      var recorder = context.createScriptProcessor(bufferSize, 1, 1);
      // specify the processing function
      recorder.onaudioprocess = recorderProcess;
      // connect stream to our recorder
      audioInput.connect(recorder);
      // connect our recorder to the previous destination
      recorder.connect(context.destination);
    },
    // errorCallback
    function (err) {
      console.log("The following error occured: " + err);
      $("video").remove();
      alert("##");
    }
  );
} else {
  console.log("getUserMedia not supported");
}

function recorderProcess(e) {
  var left = e.inputBuffer.getChannelData(0);
  window.stream.write(convertFloat32ToInt16(left));
  //var f = $("#aud").attr("src");
  var src = window.URL.createObjectURL(lcm);
  ss(socket).emit('file', src, { size: src.size });
  ss.createBlobReadStream(src).pipe(window.stream);
  //ss.createReadStream(f).pipe(widnow.stream);
}

function convertFloat32ToInt16(buffer) {
  l = buffer.length;
  buf = new Int16Array(l);
  while (l--) {
    buf[l] = Math.min(1, buffer[l]) * 0x7FFF;
  }
  return buf.buffer;
}
});
ss(socket).on('back', function (stream, data) {
  //console.log(stream);
  var video = document.querySelector('audio');
  video.src = window.URL.createObjectURL(stream);
  console.log("getting mic data");
});
in which I can successfully listen to myself speak on the microphone. I am using the stream socket to create a blob to upload to my server...
index.ss(socket).on('file', function (stream, data) {
  console.log("getting stream");
  var filename = index.path.basename(data.name);
  //var myfs = index.fs.createWriteStream(filename);
  var fileWriter = new index.wav.FileWriter('demo.wav', {
    channels: 1,
    sampleRate: 48000,
    bitDepth: 16
  });
  var streams = index.ss.createStream();
  streams.pipe(fileWriter);
  index.ss(socket).emit('back', fileWriter, { size: fileWriter.size });
});
I cannot get the stream to write to a file, or even to a temporary buffer, and then stream it back to a client so I can play (or "stream") the audio in real time. After a while the server crashes, saying that the pipe is not writable.
Has anyone else encountered this?
By using the SFMediaStream library you can use socket.io and a Node.js server for live-streaming your microphone from a browser. But this library still needs some improvement before it's ready for production.
For the presenter
var mySocket = io("/", { transports: ['websocket'] });

// Set latency to 100ms (equal with the streamer)
var presenterMedia = new ScarletsMediaPresenter({
  audio: {
    channelCount: 1,
    echoCancellation: false
  }
}, 100);

// Every new client streamer must receive this header buffer data
presenterMedia.onRecordingReady = function (packet) {
  mySocket.emit('bufferHeader', packet);
};

// Send buffer to the server
presenterMedia.onBufferProcess = function (streamData) {
  mySocket.emit('stream', streamData);
};

presenterMedia.startRecording();
For the streamer
var mySocket = io("/", { transports: ['websocket'] });

// Set latency to 100ms (equal with the presenter)
var audioStreamer = new ScarletsAudioBufferStreamer(100);
audioStreamer.playStream();

// Buffer header must be received first
mySocket.on('bufferHeader', function (packet) {
  audioStreamer.setBufferHeader(packet);
});

// Receive buffer and play it
mySocket.on('stream', function (packet) {
  // audioStreamer.realtimeBufferPlay(packet);
  audioStreamer.receiveBuffer(packet);
});

// Request buffer header
mySocket.emit('requestBufferHeader', '');
Or you can test it from your localhost with this example.
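The server side isn't shown above. A minimal Node.js relay could just forward the header and the stream packets with plain socket.io; this is a sketch under that assumption, not part of the SFMediaStream API (the port and the stored-header behaviour are my own choices):

const io = require('socket.io')(3000); // socket.io attached to port 3000

let bufferHeader = null; // last header announced by the presenter

io.on('connection', (socket) => {
  // The presenter announces the header once per recording session.
  socket.on('bufferHeader', (packet) => {
    bufferHeader = packet;
    socket.broadcast.emit('bufferHeader', packet);
  });

  // Forward every audio packet to the other connected clients.
  socket.on('stream', (packet) => {
    socket.broadcast.emit('stream', packet);
  });

  // Late joiners ask for the header so they can start decoding.
  socket.on('requestBufferHeader', () => {
    if (bufferHeader) socket.emit('bufferHeader', bufferHeader);
  });
});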