I am using the SimpleWebRTC library found here: https://simplewebrtc.com
I have the signal-master server running with STUN/TURN configured properly. It is capable of detecting other peers, so I assume STUN/TURN is functional. My problem is that when a peer starts their local video, other peers do not discover it UNLESS they reload the page. I want the new stream to be pushed to the other peers automatically, without the need to reload the page. I think it has to do with the code below (which I took from the example), but I am not sure.
The reason I have autoRequestMedia set to false is that I want users to be able to view the other peers' cameras without having to turn their own devices on (which is also why I don't call webrtc.joinRoom in the readyToCall event).
Currently, users click a button that triggers startLocalVideo(), and the video is created in the element. The problem is that nothing gets pushed to the other peers unless they reload the page. Hope that explains it all; let me know if you need more details.
var webrtc = new SimpleWebRTC({
    // the id/element dom element that will hold "our" video
    localVideoEl: 'localCam',
    // the id/element dom element that will hold remote videos
    remoteVideosEl: '',
    // immediately ask for camera access
    autoRequestMedia: false,
    autoRemoveVideos: true,
    url: 'MY SIGNAL-MASTER URL HERE',
    localVideo: {
        autoplay: true, // automatically play the video stream on the page
        mirror: false,  // flip the local video to mirror mode (for UX)
        muted: true     // mute local video stream to prevent echo
    }
});

webrtc.joinRoom('testchannel');

// a peer video has been added
webrtc.on('videoAdded', function (video, peer) {
    console.log('video added', peer);
    var remotes = document.getElementById('remoteCams');
    if (remotes) {
        var container = document.createElement('div');
        container.className = 'videoContainer';
        container.id = 'container_' + webrtc.getDomId(peer);
        container.appendChild(video);
        // suppress contextmenu
        // video.oncontextmenu = function () { return false; };
        remotes.appendChild(container);
    }
});

// a peer video was removed
webrtc.on('videoRemoved', function (video, peer) {
    console.log('video removed ', peer.nick);
    var remotes = document.getElementById('remoteCams');
    var el = document.getElementById(peer ? 'container_' + webrtc.getDomId(peer) : 'localScreenContainer');
    if (remotes && el) {
        remotes.removeChild(el);
    }
});
You have to put the joinRoom call inside the readyToCall listener:
webrtc.on('readyToCall', function () {
    webrtc.joinRoom('roomname');
});
or
put the joinRoom call inside a setTimeout callback (a minimal sketch follows).
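For illustration, a minimal sketch of the setTimeout alternative; the 1000 ms delay is an arbitrary assumption, and the readyToCall listener above remains the more reliable option:

// sketch: delay joining until the connection to the signaling server has had time to come up
// (the 1000 ms value is an assumption, not a recommendation)
setTimeout(function () {
    webrtc.joinRoom('testchannel');
}, 1000);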
I have tried a couple of solutions already, but nothing works for me.
I want to stream audio from my PC to another computer with almost zero latency. Things are working fine so far in terms of quality: the sound is clear and not choppy at all, but there is something like a delay between the moment audio starts playing on my PC and on the remote PC. For example, when I click YouTube's 'play' button, audio starts playing on the remote machine only after 3-4 seconds. The same when I click 'Pause': the sound on the remote PC stops only after a couple of seconds.
I've tried using websockets and a plain <audio> tag, but no luck so far.
For example, this is my solution using websockets and named pipes:
import asyncio
import websockets
import win32pipe
import win32file

def create_pipe():
    return win32pipe.CreateNamedPipe(r'\\.\pipe\__audio_ffmpeg', win32pipe.PIPE_ACCESS_INBOUND,
                                     win32pipe.PIPE_TYPE_MESSAGE |
                                     win32pipe.PIPE_READMODE_MESSAGE |
                                     win32pipe.PIPE_WAIT, 1, 1024 * 8, 1024 * 8, 0, None)

async def echo(websocket):
    pipe = create_pipe()
    win32pipe.ConnectNamedPipe(pipe, None)
    while True:
        data = win32file.ReadFile(pipe, 1024 * 2)
        await websocket.send(data[1])

async def main():
    async with websockets.serve(echo, "0.0.0.0", 7777):
        await asyncio.Future()  # run forever

if __name__ == '__main__':
    asyncio.run(main())
This is how I start ffmpeg:
.\ffmpeg.exe -f dshow -i audio="Stereo Mix (Realtek High Definition Audio)" -acodec libmp3lame -ab 320k -f mp3 -probesize 32 -muxdelay 0.01 -y \\.\pipe\__audio_ffmpeg
On the JS side the code is a little long, but essentially I am just reading from the web socket and appending the data to a SourceBuffer:
this.buffer = this.mediaSource.addSourceBuffer('audio/mpeg')
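To give a fuller picture, here is a minimal sketch of that read-and-append loop, assuming an <audio> element on the page and the websocket server shown above; the element lookup and the URL are assumptions:

// sketch: append each websocket message to an MSE SourceBuffer
var audioEl = document.querySelector('audio');         // assumed <audio> element
var mediaSource = new MediaSource();
audioEl.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', function () {
    var buffer = mediaSource.addSourceBuffer('audio/mpeg');
    var socket = new WebSocket('ws://localhost:7777'); // port taken from the Python server above
    socket.binaryType = 'arraybuffer';
    socket.onmessage = function (event) {
        // NOTE: a real implementation should queue chunks while buffer.updating is true
        if (!buffer.updating) {
            buffer.appendBuffer(event.data);
        }
    };
});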
Also, as you can see, I tried the -probesize 32 -muxdelay 0.01 flags, but no luck there either.
I tried using a plain <audio> tag as well, but the couple-of-seconds delay is still there.
What can I do? Am I missing something? Maybe I have to disable buffering somewhere?
I have some code, but all I learned came from the https://webrtc.github.io/samples/ website and some from MDN. It's pretty simple.
The idea is to connect 2 peers using a negotiating (signaling) server just for the initial connection. Afterwards they can share streams (audio, video, data). When I say peers I mean client computers, like browsers.
So here's an example of connecting, broadcasting, and of course receiving.
Now for some of my code.
A sketch of the process:
Note: the same code is used for connecting to and being connected from; this is how my app works because it's kind of like a chat. ClientOutgoingMessages and ClientIncomingMessages are just my wrappers around sending messages to the server (I use websockets, but AJAX is also possible).
Start: a peer initiates an RTCPeerConnection and sends an offer via the server, and also sets up events for receiving. The other peer is notified of the offer by the server, then sends an answer the same way (should they choose to), and finally the original peer accepts the answer and starts streaming. Alongside this there is another event for ICE candidates, which each side simply forwards to the other via the server; I never even bothered to learn exactly what they are, and it works without knowing.
function create_pc(peer_id) {
    var pc = new RTCPeerConnection(configuration);
    var sender;
    var localStream = MyStreamer.get_dummy_stream();

    for (var track of localStream.getTracks()) {
        sender = pc.addTrack(track, localStream);
    }

    // when a remote user adds a stream to the peer connection, we display it
    pc.ontrack = function (e) {
        console.log("got a remote stream");
        remoteVideo.style.visibility = 'visible';
        remoteVideo.srcObject = e.streams[0];
    };

    // setup ICE handling
    pc.onicecandidate = function (ev) {
        if (ev.candidate) {
            ClientOutgoingMessages.candidate(peer_id, ev.candidate);
        }
    };

    // status
    pc.oniceconnectionstatechange = function (ev) {
        var state = pc.iceConnectionState;
        console.log("oniceconnectionstatechange: " + state);
    };

    MyRTC.set_pc(peer_id, {
        pc: pc,
        sender: sender
    });

    return pc;
}
function offer_someone(peer_id, peer_name) {
    var pc = MyRTC.create_pc(peer_id);
    pc.createOffer().then(function (offer) {
        ClientOutgoingMessages.offer(peer_id, offer);
        pc.setLocalDescription(offer);
    });
}

function answer_offer(peer_id) {
    var pc = MyRTC.create_pc(peer_id);
    var offer = MyOpponents.get_offer(peer_id);
    pc.setRemoteDescription(new RTCSessionDescription(offer));
    pc.createAnswer().then(function (answer) {
        pc.setLocalDescription(answer);
        ClientOutgoingMessages.answer(peer_id, answer);
        // alert("rtc established!");
        MyStreamer.stream_current();
    });
}
Handling messages from the server:
offer: function offer(data) {
    if (MyRTC.get_pc(data.connectedUser)) {
        // alert("Not accepting offers, already have a conn to " + data.connectedUser);
        // return;
    }
    MyOpponents.set_offer(data.connectedUser, data.offer);
},

answer: function answer(data) {
    var opc = MyRTC.get_pc(data.connectedUser);
    opc && opc.pc.setRemoteDescription(new RTCSessionDescription(data.answer)).catch(function (err) {
        console.error(err);
        // alert(err);
    });
    // alert("rtc established!");
    MyStreamer.stream_current();
},

candidate: function candidate(data) {
    var opc = MyRTC.get_pc(data.connectedUser);
    opc && opc.pc.addIceCandidate(new RTCIceCandidate(data.candidate));
},

leave: function leave(data) {
    MyRTC.close_pc(data.connectedUser);
},
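The MyRTC helper used throughout isn't shown above. Purely as an illustration, here is a minimal sketch of what such a per-peer registry could look like; the whole object is an assumption, only the function names come from the code above:

// sketch of a per-peer registry matching the calls used above (assumed implementation)
var MyRTC = {
    pcs: {}, // peer_id -> { pc: RTCPeerConnection, sender: RTCRtpSender }
    set_pc: function (peer_id, entry) { this.pcs[peer_id] = entry; },
    get_pc: function (peer_id) { return this.pcs[peer_id]; },
    close_pc: function (peer_id) {
        var entry = this.pcs[peer_id];
        if (entry) {
            entry.pc.close();
            delete this.pcs[peer_id];
        }
    }
    // create_pc from the snippet above would also live on this object
};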
I'm successfully running a client web page that acts as a voice message sender, using the MediaRecorder API:
when the user presses any key, an audio recording starts;
when the key is released, the audio recording is sent, via socket.io, to a server for further processing.
This is a sort of PTT (Push To Talk) user experience: the user just has to press a key (push) to activate the voice recording, and release the key to stop recording, which triggers sending the message to the server.
Here is a JavaScript code chunk I used:
navigator.mediaDevices
    .getUserMedia({ audio: true })
    .then(stream => {
        const mediaRecorder = new MediaRecorder(stream)
        var audioChunks = []

        //
        // start and stop recording:
        // keyboard (any key) events
        //
        document
            .addEventListener('keydown', () => mediaRecorder.start())
        document
            .addEventListener('keyup', () => mediaRecorder.stop())

        //
        // add data chunk to mediarecorder
        //
        mediaRecorder
            .addEventListener('dataavailable', event => {
                audioChunks.push(event.data)
            })

        //
        // mediarecorder stop event
        // triggers socket.io audio message emission
        //
        mediaRecorder
            .addEventListener('stop', () => {
                socket.emit('audioMessage', audioChunks)
                audioChunks = []
            })
    })
Now, what I want is to activate/deactivate the audio (speech) recording not only from a web page button/key/touch, but also from an external hardware microphone with a Push-To-Talk button. More precisely, I want to interface an industrial headset with a PTT button on the ear dome, see the photo:
BTW, the PTT button is just a physical button that acts as a short-circuit toggle switch, as in the photo, just as an example:
By default the microphone is grounded and the input signal == 0.
When the PTT button is pressed, the microphone is activated and the input signal != 0.
Now my question is: how can I use the Web Audio API to detect when the PTT button is pressed (so the audio signal is > 0) in order to call mediaRecorder.start()?
Reading here, I guess I have to use the stream returned by mediaDevices.getUserMedia and create an AudioContext() processor:
// define the handler before passing it to getUserMedia (a const is not hoisted)
const handleSuccess = function (stream) {
    const context = new AudioContext();
    const source = context.createMediaStreamSource(stream);
    const processor = context.createScriptProcessor(1024, 1, 1);

    source.connect(processor);
    processor.connect(context.destination);

    processor.onaudioprocess = function (e) {
        // Do something with the data
        console.log(e.inputBuffer);
    };
};

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(handleSuccess);
But what must the processor.onaudioprocess function do to start (volume > DELTA) and stop (volume < DELTA) the MediaRecorder?
I guess the volume detection could be useful in two situations:
With the PTT button, where the user explicitly decides the duration of the speech by pressing and releasing the button
Without the PTT button, in which case the voice message is created in so-called VOX mode (continuous audio processing)
Any idea?
I'm answering my own question just to share a solution I found.
@cwilso's old project volume-meter seems to be precisely the implementation of what @scott-stensland suggested in the comment above. See the demo: https://webaudiodemos.appspot.com/volume-meter/
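For reference, a rough sketch of the threshold idea behind such a volume meter, wired into the ScriptProcessor from the question; the DELTA value and the mediaRecorder wiring are assumptions, not code taken from volume-meter:

// sketch: start/stop the MediaRecorder when the RMS level crosses a threshold
const DELTA = 0.02;   // assumed threshold, needs tuning per microphone/headset
let recording = false;

processor.onaudioprocess = function (e) {
    const samples = e.inputBuffer.getChannelData(0);
    let sum = 0;
    for (let i = 0; i < samples.length; i++) {
        sum += samples[i] * samples[i];
    }
    const rms = Math.sqrt(sum / samples.length);

    if (!recording && rms > DELTA) {
        recording = true;
        mediaRecorder.start();   // mediaRecorder from the snippet above
    } else if (recording && rms < DELTA) {
        recording = false;       // a real VOX mode would add a hangover time here
        mediaRecorder.stop();
    }
};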
UPDATE
BTW, using @cwilso's project and @scott-stensland's suggestion, I implemented WeBAD, an open-source project that also solves my original question:
https://github.com/solyarisoftware/WeBAD
I am able to make a video call between two individuals, but now I want to add a button to mute and unmute the audio while streaming video. I searched a lot on the internet, but nothing worked out for me.
Then I found the configuration in the options property of WebRtcPeerSendrecv which enables audio on connection, but the problem is how to update or toggle it during the stream.
Here is my code:
var videoInput = document.getElementById('videoInput');
var videoOutput = document.getElementById('videoOutput');

var constraints = {
    audio: true, // how do I toggle this during the stream?
    video: {
        width: 640,
        framerate: 15
    }
};

var options = {
    localVideo: videoInput,
    remoteVideo: videoOutput,
    onicecandidate: onIceCandidate,
    mediaConstraints: constraints
};

var webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function (error) {
    if (error) return onError(error);
    this.generateOffer(onOffer);
});
I am also open to any alternative that helps me integrate mute/unmute functionality into my stream.
I have been stuck on this for so long; any kind of help is appreciated. Thanks in advance.
I found the solution. There is a property on webRtcPeer, available after making the peer connection, that allows us to manipulate the stream's audio: audioEnabled, which is a boolean. What I did is simply change its value to true/false as per my requirement,
like this:
webRtcPeer.audioEnabled = false //by default it will be false
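For example, a small sketch wiring this up to a mute/unmute button; the button element is an assumption, only the audioEnabled property comes from the answer above:

// sketch: toggle the outgoing audio on a button click (assumed <button id="muteButton">)
var muteButton = document.getElementById('muteButton');

muteButton.addEventListener('click', function () {
    webRtcPeer.audioEnabled = !webRtcPeer.audioEnabled;
    muteButton.textContent = webRtcPeer.audioEnabled ? 'Mute' : 'Unmute';
});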
I'm currently working on audio recording through the web browser (.wav format), and we currently use the following code to start recording and emit data via a socket:
async start () {
    this.stream = await navigator.mediaDevices.getUserMedia({
        audio: true,
        video: false
    })

    this.audioCtx = new AudioContext()
    const source = this.audioCtx.createMediaStreamSource(this.stream)
    const scriptProcessor = this.audioCtx.createScriptProcessor(0, 1, 1)

    source.connect(scriptProcessor)
    scriptProcessor.connect(this.audioCtx.destination)

    scriptProcessor.onaudioprocess = event => {
        this.emit('record', event.inputBuffer.getChannelData(0))
    }

    return {
        device: this.stream.getAudioTracks().length && this.stream.getAudioTracks()[0].label || 'Unknown',
        sampleRate: this.audioCtx.sampleRate
    }
}
Although it seems to work well, it also seems that, randomly, silent sequences are inserted into the emitted data (~8 consecutive frames in the sample). This doesn't seem to be hardware related, as we have the same issue regardless of the microphone used.
I would like to know whether it's the way we collect the data or the way we send it that causes this issue (and possibly how to fix it).
Thanks,
I want to change from an audio/video stream to a "screen sharing" stream:
peerConnection.removeStream(streamA) // __o_j_sep... in Screenshots below
peerConnection.addStream(streamB) // SSTREAM in Screenshots below
streamA is a video/audio stream coming from my camera and microphone.
streamB is the screencapture I get from my extension.
They are both MediaStream objects that look like this:
* 1 Remark
But if I remove streamA from the peerConnection and call addStream(streamB) as above, nothing seems to happen.
The following works as expected (the stream on both ends is removed and re-added)
peerConnection.removeStream(streamA) // __o_j_sep...
peerConnection.addStream(streamA) // __o_j_sep...
More Details
I have found this example, which does "the reverse" (switching from screen capture to audio/video from the camera), but I can't spot a significant difference.
The peerConnection RTCPeerConnection object is actually created by the SIPML library (source code available here), and I access it like this:
var peerConnection = stack.o_stack.o_layer_dialog.ao_dialogs[1].o_msession_mgr.ao_sessions[0].o_pc
(Yes, this does not look right, but there is no official way to get access to the peer connection; see the discussion here and here.)
Originally I tried to just (ex)change the videoTracks of streamA with the videoTrack of streamB. See the question here. It was suggested to me that I should try to renegotiate the peer connection (by removing/adding streams to it), because addTrack does not trigger a renegotiation.
I've also asked for help here, but the maintainer seems very busy and hasn't had a chance to respond yet.
* 1 Remark: Why does streamB not have a videoTracks property? The stream plays in an HTML <video> element and seems to "work". Here is how I get it:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: streamId,
            maxWidth: window.screen.width,
            maxHeight: window.screen.height
            //, maxFrameRate: 3
        }
    }
},
// success callback
function (localMediaStream) {
    SSTREAM = localMediaStream; // streamB
},
// fail callback
function (error) {
    console.log(error);
});
it also seems to have a videoTrack:
I'm running:
OS X 10.9.3
Chrome Version 35.0.1916.153
To answer your first question: when modifying the MediaStream in an active peer connection, the peerconnection object will fire an onnegotiationneeded event. You need to handle that event and re-exchange your SDPs. The main reason behind this is so that both parties know what streams are being sent between them. When the SDPs are exchanged, the MediaStream ID is included, and if there is a new stream with a new ID (even with all other things being equal), a renegotiation must take place.
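A rough sketch of handling that event, using the callback-style API that matched Chrome 35 at the time; sendOfferToRemotePeer stands for whatever signaling you already use and is an assumption:

// sketch: re-run the offer/answer exchange whenever the set of streams changes
peerConnection.onnegotiationneeded = function () {
    peerConnection.createOffer(function (offer) {
        peerConnection.setLocalDescription(offer, function () {
            sendOfferToRemotePeer(offer); // assumed signaling helper
        }, logError);
    }, logError);
};

function logError(err) {
    console.log(err);
}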
For your second question (about SSTREAM): it does indeed contain video tracks, but there is no videoTracks attribute on webkitMediaStreams. You can grab tracks via their ID, however.
Since there is the possibility of having numerous tracks for each media type, there is no single attribute for a video track or audio track but instead an array of them. The .getVideoTracks() call returns an array of the current video tracks. So, you COULD grab a particular video track by indicating its index, e.g. .getVideoTracks()[0].
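For example, a short snippet along those lines, using the SSTREAM screen-capture stream from the question:

// grab the first video track of the screen-capture stream
var screenTrack = SSTREAM.getVideoTracks()[0];
console.log(screenTrack.id, screenTrack.label, screenTrack.enabled);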
I do something similar: on clicking a button, I remove the active stream and add the other one.
This is the way I do it, and it works perfectly for me:
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

// audio-only stream
var constraints_audio = {
    audio: true
};

gotStream = function (localstream_aud) {
    _this.rtc.localstream_aud = localstream_aud;
    _this.rtc.mediaConstraints = constraints_audio;
    _this.rtc.createOffer();
};

getUserMedia(constraints_audio, gotStream);

// screen-capture stream (no audio, see the note below)
var constraints_screen = {
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'screen'
        }
    }
};

gotStream = function (localstream) {
    _this.rtc.localstream = localstream;
    _this.rtc.mediaConstraints = constraints_screen;
    _this.rtc.createStream();
    _this.rtc.createOffer();
};

getUserMedia(constraints_screen, gotStream);
Chrome doesn't allow audio along with the 'screen' source, so I create a separate stream for it.
You will need to do the opposite in order to switch back to your older video stream or actually to any other stream you want.
Hope this helps