Real time audio streaming from ffmpeg to browser (am I missing something?) - javascript

I have tried a couple of solutions already, but nothing works for me.
I want to stream audio from my PC to another computer with almost zero latency. Playback quality is fine so far: the sound is clear and not choppy at all. But there is a delay between the moment audio starts playing on my PC and when it plays on the remote PC. For example, when I click YouTube's 'play' button, audio starts playing on the remote machine only after 3-4 seconds. Likewise, when I click 'Pause', the sound on the remote PC stops only after a couple of seconds.
I've tried both websockets and a plain audio tag, but no luck so far.
For example this is my solution by using websockets and pipes:
import asyncio
import websockets
import win32file
import win32pipe

def create_pipe():
    return win32pipe.CreateNamedPipe(
        r'\\.\pipe\__audio_ffmpeg', win32pipe.PIPE_ACCESS_INBOUND,
        win32pipe.PIPE_TYPE_MESSAGE |
        win32pipe.PIPE_READMODE_MESSAGE |
        win32pipe.PIPE_WAIT, 1, 1024 * 8, 1024 * 8, 0, None)

async def echo(websocket):
    pipe = create_pipe()
    win32pipe.ConnectNamedPipe(pipe, None)
    while True:
        data = win32file.ReadFile(pipe, 1024 * 2)
        await websocket.send(data[1])

async def main():
    async with websockets.serve(echo, "0.0.0.0", 7777):
        await asyncio.Future()  # run forever

if __name__ == '__main__':
    asyncio.run(main())
The way I start ffmpeg
.\ffmpeg.exe -f dshow -i audio="Stereo Mix (Realtek High Definition Audio)" -acodec libmp3lame -ab 320k -f mp3 -probesize 32 -muxdelay 0.01 -y \\.\pipe\__audio_ffmpeg
On the JS side the code is a bit long, but essentially I am just reading from the web socket and appending to a source buffer:
this.buffer = this.mediaSource.addSourceBuffer('audio/mpeg')
Also, as you can see, I tried the -probesize 32 -muxdelay 0.01 flags, but no luck there either.
I tried a plain audio tag as well, but the couple-of-seconds delay is still there.
What can I do? Am I missing something? Maybe I have to disable buffering somewhere?

I have some code; everything I learned came from the https://webrtc.github.io/samples/ website and some from MDN. It's pretty simple.
The idea is to connect 2 peers using a negotiating server just for the initial connection. Afterwards they can share streams (audio, video, data). When I say peers I mean client computers like browsers.
So here's an example of connecting, broadcasting, and of course receiving.
Now for some of my code.
A sketch of the process:
Note: the same code is used for connecting to and connecting from, because my app works kind of like a chat. ClientOutgoingMessages and ClientIncomingMessages are just my wrappers around sending messages to the server (I use websockets, but AJAX would also work).
Start: a peer initiates an RTCPeerConnection and sends an offer via the server, also setting up events for receiving. The other peer is notified of the offer by the server, then sends an answer the same way (should they choose to), and finally the original peer accepts the answer and starts streaming. Along the way there is another event for ICE candidates; I never bothered to learn exactly what it is, since it works without knowing.
function create_pc(peer_id) {
    var pc = new RTCPeerConnection(configuration);
    var sender
    var localStream = MyStreamer.get_dummy_stream();
    for (var track of localStream.getTracks()) {
        sender = pc.addTrack(track, localStream);
    }
    // when a remote user adds a stream to the peer connection, we display it
    pc.ontrack = function (e) {
        console.log("got a remote stream")
        remoteVideo.style.visibility = 'visible'
        remoteVideo.srcObject = e.streams[0]
    };
    // Setup ice handling
    pc.onicecandidate = function (ev) {
        if (ev.candidate) {
            ClientOutgoingMessages.candidate(peer_id, ev.candidate);
        }
    };
    // status
    pc.oniceconnectionstatechange = function (ev) {
        var state = pc.iceConnectionState;
        console.log("oniceconnectionstatechange: " + state)
    };
    MyRTC.set_pc(peer_id, {
        pc: pc,
        sender: sender
    });
    return pc;
}
function offer_someone(peer_id, peer_name) {
    var pc = MyRTC.create_pc(peer_id)
    pc.createOffer().then(function (offer) {
        ClientOutgoingMessages.offer(peer_id, offer);
        pc.setLocalDescription(offer);
    });
}

function answer_offer(peer_id) {
    var pc = MyRTC.create_pc(peer_id)
    var offer = MyOpponents.get_offer(peer_id)
    pc.setRemoteDescription(new RTCSessionDescription(offer));
    pc.createAnswer().then(function (answer) {
        pc.setLocalDescription(answer);
        ClientOutgoingMessages.answer(peer_id, answer);
        // alert ("rtc established!")
        MyStreamer.stream_current();
    });
}
handling messages from server
offer: function offer(data) {
    if (MyRTC.get_pc(data.connectedUser)) {
        // alert("Not accepting offers already have a conn to " + data.connectedUser)
        // return;
    }
    MyOpponents.set_offer(data.connectedUser, data.offer)
},
answer: function answer(data) {
    var opc = MyRTC.get_pc(data.connectedUser)
    opc && opc.pc.setRemoteDescription(new RTCSessionDescription(data.answer)).catch(function (err) {
        console.error(err)
        // alert (err)
    });
    // alert ("rtc established!")
    MyStreamer.stream_current();
},
candidate: function candidate(data) {
    var opc = MyRTC.get_pc(data.connectedUser)
    opc && opc.pc.addIceCandidate(new RTCIceCandidate(data.candidate));
},
leave: function leave(data) {
    MyRTC.close_pc(data.connectedUser);
},
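The negotiating server itself isn't shown above. Its routing logic is tiny: it never inspects SDP or candidates, it just forwards each offer/answer/candidate/leave message to the addressed peer. A minimal sketch of that logic (the function and message-shape names here are illustrative, not taken from my actual server):

```javascript
// Minimal signaling-relay sketch: `peers` maps a peer id to a send function
// (e.g. a websocket's send). The relay only forwards; peers do the WebRTC work.
function makeRelay() {
  const peers = new Map();
  return {
    join(peerId, send) { peers.set(peerId, send); },
    leave(peerId) { peers.delete(peerId); },
    // msg: { type: 'offer' | 'answer' | 'candidate' | 'leave', to, connectedUser, ... }
    relay(msg) {
      const send = peers.get(msg.to);
      if (send) send(JSON.stringify(msg));
    },
  };
}
```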


iPhone 14 won't record using MediaRecorder

Our website records audio and plays it back for a user. It has worked for years with many different devices, but it started failing on the iPhone 14. I created a test app at https://nmp-recording-test.netlify.app/ so I can see what is going on. It works perfectly on all devices but it only works the first time on an iPhone 14. It works on other iPhones and it works on iPad and MacBooks using Safari or any other browser.
It looks like it will record if that is the first audio you ever do. If I get an AudioContext somewhere else the audio playback will work for that, but then the recording won't.
The only symptom I can see is that it doesn't call MediaRecorder.ondataavailable when it is not working, but I assume that is because it isn't recording.
Here is the pattern that I'm seeing with my test site:
Click "new recording". (the level indicator moves, the data available callback is triggered)
Click "listen" I hear what I just did
Click "new recording". (no levels move, no data is reported)
Click "listen" nothing is played.
But if I do anything else first, like clicking the metronome on and off, then it won't record the FIRST time either.
The "O.G. Recording" is the original way I was doing the recording, using the deprecated methods createMediaStreamSource() and createScriptProcessor()/createJavaScriptNode(). I thought maybe the iPhone finally got rid of those, so I created the MediaRecorder version.
What I'm doing, basically, is (truncated to show the important part):
const chunks = []

function onSuccess(stream: MediaStream) {
    mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.ondataavailable = function (e) {
        chunks.push(e.data);
    }
    mediaRecorder.start(1000);
}

navigator.mediaDevices.getUserMedia({ audio: true }).then(onSuccess, onError);
Has anyone else seen anything different in the way the iPhone 14 handles recording?
Does anyone have a suggestion about how to debug this?
If you have an iPhone 14, would you try my test program above and let me know if you get the same results? We only have one iPhone 14 to test with, and maybe there is something weird about that device.
If it works you should see a number of lines something like data {"len":6784} appear every second when you are recording.
--- EDIT ---
I reworked the code similar to Frank zeng's suggestion and I am getting it to record, but it is still not right. The volume is really low, it looks like there are some dropouts, and there is a really long pause when resuming the AudioContext.
The new code seems to work perfectly in the other devices and browsers I have access to.
--- EDIT 2 ---
There were two problems: one was that the deprecated use of createScriptProcessor stopped working, and the second was an iOS bug that was fixed in version 16.2. So rewriting to use an AudioWorklet was needed, but keeping the recording going once it is started was not.
I have the same problem as you. I think the createScriptProcessor API of AudioContext is broken on the iPhone 14, so I used the new AudioWorkletNode API to replace it. Also, don't close the stream, because the second recording session on the iPhone 14 is too laggy; remember to destroy the data after recording instead. After testing, this solved the problem for me. Here's my code:
// get stream
window.navigator.mediaDevices.getUserMedia(options).then(async (stream) => {
    // that.stream = stream
    that.context = new AudioContext()
    await that.context.resume()
    const rate = that.context.sampleRate || 44100
    that.mp3Encoder = new lamejs.Mp3Encoder(1, rate, 128)
    that.mediaSource = that.context.createMediaStreamSource(stream)
    // this API is being phased out: keep using it if available,
    // otherwise fall back to the worklet approach for capturing audio data
    if (that.context.createScriptProcessor && typeof that.context.createScriptProcessor === 'function') {
        that.mediaProcessor = that.context.createScriptProcessor(0, 1, 1)
        that.mediaProcessor.onaudioprocess = event => {
            window.postMessage({ cmd: 'encode', buf: event.inputBuffer.getChannelData(0) }, '*')
            that._decode(event.inputBuffer.getChannelData(0))
        }
    } else { // use the new approach
        that.mediaProcessor = await that.initWorklet()
    }
    resolve()
})
// content of the audioworklet function
async initWorklet() {
    try {
        /* audio stream analysis node */
        let audioWorkletNode;
        /* load the AudioWorkletProcessor module and add it to the current worklet */
        await this.context.audioWorklet.addModule('/get-voice-node.js');
        /* bind the AudioWorkletNode to the loaded AudioWorkletProcessor */
        audioWorkletNode = new AudioWorkletNode(this.context, "get-voice-node");
        /* the AudioWorkletNode and AudioWorkletProcessor communicate via a MessagePort */
        console.log('audioWorkletNode', audioWorkletNode)
        const messagePort = audioWorkletNode.port;
        messagePort.onmessage = (e) => {
            let channelData = e.data[0];
            window.postMessage({ cmd: 'encode', buf: channelData }, '*')
            this._decode(channelData)
        }
        return audioWorkletNode;
    } catch (e) {
        console.log(e)
    }
}
// content of get-voice-node.js; remember to put it in the static resource directory
class GetVoiceNode extends AudioWorkletProcessor {
    /*
     * options are passed in from new AudioWorkletNode()
     */
    constructor() {
        super()
    }
    /*
     * inputList and outputList are arrays of inputs and outputs;
     * the gotcha is that each block is only 128 samples. How do you configure that?
     */
    process(inputList, outputList, parameters) {
        // console.log(inputList)
        if (inputList.length > 0 && inputList[0].length > 0) {
            this.port.postMessage(inputList[0]);
        }
        return true // let the system know we are still active and ready to process audio
    }
}
registerProcessor('get-voice-node', GetVoiceNode)
Destroy the recording instance and free the memory; if you want to use it again next time, you had better create a new instance:
this.recorder.stop()
this.audioDurationTimer && window.clearInterval(this.audioDurationTimer)
const audioBlob = this.recorder.getMp3Blob()
// Destroy the recording instance and free the memory
this.recorder = null

webrtc audio device disconnection and reconnection

I have a video call application based on WebRTC. It works as expected. However, when a call is going on, if I disconnect and reconnect the audio device (mic + speaker), only the speaker part works. The mic seems not to work anymore; the other side can't hear.
Is there any way to inform WebRTC to take audio input again once audio device is connected back?
Is there any way to inform WebRTC to take audio input again once audio device is connected back?
Your question appears simple—the symmetry with speakers is alluring—but once we're dealing with users who have multiple cameras and microphones, it's not that simple: If your user disconnects their bluetooth headset they were using, should you wait for them to reconnect it, or immediately switch to their laptop microphone? If the latter, do you switch back if they reconnect it later? These are application decisions.
The APIs to handle these things are: primarily the ended and devicechange events, and the replaceTrack() method. You may also need the deviceId constraint and the enumerateDevices() method to handle multiple devices.
However, to keep things simple, let's take the assumptions in your question at face value to explore the APIs:
When the user unplugs their sole microphone (not their camera) mid-call, our job is to resume conversation with it when they reinsert it, without dropping video:
First, we listen to the ended event to learn when our local audio track drops.
When that happens, we listen for a devicechange event to detect re-insertion (of anything).
When that happens, we could check what changed using enumerateDevices(), or simply try getUserMedia again (microphone only this time).
If that succeeds, use await sender.replaceTrack(newAudioTrack) to send our new audio.
This might look like this:
let sender;

(async () => {
    try {
        const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
        pc.addTrack(stream.getVideoTracks()[0], stream);
        sender = pc.addTrack(stream.getAudioTracks()[0], stream);
        sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
    } catch (e) {
        console.log(e);
    }
})();

async function tryAgain() {
    try {
        const stream = await navigator.mediaDevices.getUserMedia({audio: true});
        await sender.replaceTrack(stream.getAudioTracks()[0]);
        navigator.mediaDevices.ondevicechange = null;
        sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
    } catch (e) {
        if (e.name == "NotFoundError") return;
        console.log(e);
    }
}

// Your usual WebRTC negotiation code goes here
The above is for illustration only. I'm sure there are lots of corner cases to consider.
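For instance, the enumerateDevices() check mentioned above boils down to looking for an "audioinput" entry in the device list. A sketch of that helper (the name is hypothetical; it takes the array that navigator.mediaDevices.enumerateDevices() resolves with):

```javascript
// Return true if the device list contains at least one microphone.
// MediaDeviceInfo.kind is "audioinput", "audiooutput", or "videoinput".
function hasAudioInput(devices) {
  return devices.some((d) => d.kind === 'audioinput');
}
```

You could call this inside tryAgain() to skip the getUserMedia attempt when no microphone has appeared yet.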

NodeJS ZeroMQ - When producer is ready to send message after connection?

I have done some small research into the patterns supported by ZeroMQ. I would like to describe a problem with the PUB/SUB pattern, though I probably hit the same problem in my recent project with the PUSH/PULL pattern as well. I use the NodeJS ZeroMQ implementation.
I prepared two examples (server.js & client.js). I noticed that the first message from server.js is lost every time I restart the server (a message is sent every 1 second); client.js doesn't get the first message. It is probably caused by too short a delay before sending messages. When I start sending messages after some delay (e.g. 1 second), everything works fine. I think zmq needs some time to initialize the connection between publisher and subscriber.
I would like to know when the producer (server) is ready to send messages to subscribed clients. How do I get this information?
I don't understand why client.js, connected and subscribed for messages, doesn't get them just because the server is not ready to support subscriptions after a restart.
Maybe it works like this by design.
server.js:
var zmq = require('zmq');
console.log('server zmq: ' + zmq.version);
var publisher = zmq.socket('pub');
publisher.bindSync("tcp://*:5555");
var i = 0;
var msg = "get_status OK ";

function sendMsg() {
    console.log(msg + i);
    publisher.send(msg + i);
    i++;
    setTimeout(sendMsg, 1000);
}
sendMsg();

process.on('SIGINT', function() {
    publisher.close();
    process.exit();
});
client.js:
var zmq = require('zmq');
console.log('client zmq: ' + zmq.version);
var subscriber = zmq.socket('sub');
subscriber.subscribe("get_status");

subscriber.on('message', function(data) {
    console.log(data.toString());
});
subscriber.connect("tcp://127.0.0.1:5555");

process.on('SIGINT', function() {
    subscriber.close();
    process.exit();
});
The node zmq library's repo lists the supported monitoring events. Subscribing to these allows you to monitor your connection, in this case via the accept event. However, don't forget that you'll also have to call the monitor() function on the socket to activate monitoring.
You should end up with something like:
var publisher = zmq.socket('pub');

publisher.on('accept', function(fd, ep) {
    sendMsg();
});

publisher.monitor(100, 0);
publisher.bindSync("tcp://*:5555");

node-serialport on windows with multiple devices hangs

I've been experimenting with the node-serialport library to access devices connected to a USB hub and send/receive data to these devices. The code works fine on Linux, but on Windows (Windows 8.1 and Windows 7) I get some odd behaviour: it doesn't seem to work for more than 2 devices, and it just hangs when writing to the port. The callback for the write method never gets called. I'm not sure how to go about debugging this issue. I'm not a Windows person; if someone can give me some directions, that would be great.
Below is the code I'm currently using to test.
/*
 Sample code to debug the node-serialport library on Windows
*/
//var SerialPort = require("./build/Debug/serialport");
var s = require("./serialport-logger");
var parsers = require('./parsers');
var ee = require('events');

s.list(function(err, ports) {
    console.log("Number of ports available: " + ports.length);
    ports.forEach(function(port) {
        var cName = port.comName,
            sp;
        //console.log(cName);
        sp = new s.SerialPort(cName, {
            parser: s.parsers.readline("\r\n")
        }, false);
        // sp.once('data', function(data) {
        //     if (data) {
        //         console.log("Retrieved data " + data);
        //         //console.log(data);
        //     }
        // });
        //console.log("Is port open " + sp.isOpen());
        if (!sp.isOpen()) {
            sp.open(function(err) {
                if (err) {
                    console.log("Port cannot be opened manually");
                } else {
                    console.log("Port is open " + cName);
                    sp.write("LED=2\r\n", function(err) {
                        if (err) {
                            console.log("Cannot write to port");
                            console.error(err);
                        } else {
                            console.log("Written to port " + cName);
                        }
                    });
                }
            });
        }
        //sp.close();
    });
});
I'm sure you noticed I'm not require'ing the serialport library; instead I'm using the serialport-logger library. It's just a way to use the serialport addons compiled with the debug switch on a Windows box.
TLDR; For me it works by increasing the threadpool size for libuv.
$ UV_THREADPOOL_SIZE=20 node server.js
I was fine with opening/closing the port for each command for a while, but a feature request I'm working on now needs to keep the port open and reuse the connection to run the commands, so I had to find an answer for this issue.
The number of devices I could support by opening a connection and holding on to it was 3. The issue happens to be the default threadpool size of 4. I already have another background worker occupying 1 thread, so I had only 3 threads left. The EIO_WatchPort function in node-serialport runs as a background worker, which blocks a thread. So when I use more than 3 devices, the "open" method call waits in the queue to be pushed to a background worker, but since they are all busy it blocks node. Then any subsequent requests cannot be handled by node. Finally, increasing the thread pool size did the trick; it's working fine now. It might help someone. This thread definitely helped me as well.
As opensourcegeek pointed out, all you need to do is set the UV_THREADPOOL_SIZE variable above the default of 4 threads.
I had problems in my project with node.js and the modbus-rtu or modbus-serial library when I tried to query more than 3 RS-485 devices on USB ports. With 3 devices, no problem; with a 4th or more, permanent timeouts. Each device responded within a 600 ms interval, but when the pool was busy they never got a response back.
So on Windows, simply run this on the command line before starting node:
set UV_THREADPOOL_SIZE=8
or whatever you like, up to 128. I had 6 USB ports to query, so I used 8.

Trouble with WebRTC in Nightly (22) and Chrome (25)

I'm experimenting with WebRTC between two browsers using RTCPeerConnection and my own long-polling implementation. I've created a demo application which works successfully in Mozilla Nightly (22); however, in Chrome (25) I can't get any remote video, and only an "empty black video" appears. Is there something wrong with my JS code?
The function sendMessage(message) sends a message to the server via long-polling, and on the other side it is received using onMessage().
var peerConnection;
var peerConnection_config = {"iceServers": [{"url": "stun:23.21.150.121"}]};

// when a message from the server is received
function onMessage(evt) {
    if (!peerConnection)
        call(false);
    var signal = JSON.parse(evt);
    if (signal.sdp) {
        peerConnection.setRemoteDescription(new RTCSessionDescription(signal.sdp));
    } else {
        peerConnection.addIceCandidate(new RTCIceCandidate(signal.candidate));
    }
}

function call(isCaller) {
    peerConnection = new RTCPeerConnection(peerConnection_config);
    // send any ice candidates to the other peer
    peerConnection.onicecandidate = function(evt) {
        sendMessage(JSON.stringify({"candidate": evt.candidate}));
    };
    // once the remote stream arrives, show it in the remote video element
    peerConnection.onaddstream = function(evt) {
        // attach media stream to remote video - WebRTC wrapper
        attachMediaStream($("#remote-video").get(0), evt.stream);
    };
    // get the local stream, show it in the local video element and send it
    getUserMedia({"audio": true, "video": true}, function(stream) {
        // attach media stream to local video - WebRTC wrapper
        attachMediaStream($("#local-video").get(0), stream);
        $("#local-video").get(0).muted = true;
        peerConnection.addStream(stream);
        if (isCaller)
            peerConnection.createOffer(gotDescription);
        else {
            peerConnection.createAnswer(gotDescription);
        }
        function gotDescription(desc) {
            sendMessage(JSON.stringify({"sdp": desc}));
            peerConnection.setLocalDescription(desc);
        }
    }, function() {
    });
}
My best guess is that there is a problem with your STUN server configuration. To determine if this is the issue, try using google's public stun server stun:stun.l.google.com:19302 (which won't work in Firefox, but should definitely work in Chrome) or test on a local network with no STUN server configured.
Also, verify that your ice candidates are being delivered properly. Firefox doesn't actually generate 'icecandidate' events (it includes the candidates in the offer/answer), so an issue with delivering candidate messages could also explain the discrepancy.
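One easy check on the sending side: the final 'icecandidate' event fires with a null candidate, and forwarding that to the remote peer's addIceCandidate can cause errors in some builds. A sketch of a guard (packCandidate is a hypothetical helper wrapping the question's sendMessage transport):

```javascript
// Serialize a candidate event for the signaling channel, or return null
// for the end-of-candidates marker so the caller can skip sending it.
function packCandidate(evt) {
  if (!evt.candidate) return null; // end of candidates; don't relay
  return JSON.stringify({ candidate: evt.candidate });
}
```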
Make sure your video tag attribute autoplay is set to 'autoplay'.
