I am able to make a video call between two individuals, but now I want to add a button to mute and unmute the audio while streaming video. I searched a lot on the internet, but nothing worked for me.
Then I found the configuration in the options property of WebRtcPeerSendrecv that enables audio on connection, but the problem is how to update or toggle it during the stream.
Here is my code:
var videoInput = document.getElementById('videoInput');
var videoOutput = document.getElementById('videoOutput');

var constraints = {
  audio: true, // how do I toggle this during the stream?
  video: {
    width: 640,
    frameRate: 15
  }
};

var options = {
  localVideo: videoInput,
  remoteVideo: videoOutput,
  onicecandidate: onIceCandidate,
  mediaConstraints: constraints
};

var webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function (error) {
  if (error) return onError(error);
  this.generateOffer(onOffer);
});
I am also open to any alternative that would help me integrate mute/unmute functionality into my stream.
I have been stuck on this for so long; any kind of help is appreciated. Thanks in advance.
I found the solution. There is a property on webRtcPeer, available after the peer connection is made, that lets us manipulate the stream: audioEnabled, which is a boolean. I just change its value to true/false as required,
like this:
webRtcPeer.audioEnabled = false; // false mutes the outgoing audio, true unmutes it
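For a mute button, a minimal sketch (the muteBtn element is an assumption, not from the original code; audioEnabled is the kurento-utils property mentioned above):
// Hypothetical mute/unmute button wired to the existing webRtcPeer
var muteBtn = document.getElementById('muteBtn'); // assumed to exist in the page
muteBtn.addEventListener('click', function () {
  // false mutes the outgoing audio, true unmutes it
  webRtcPeer.audioEnabled = !webRtcPeer.audioEnabled;
  muteBtn.textContent = webRtcPeer.audioEnabled ? 'Mute' : 'Unmute';
});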
I implemented web push notifications. The notification arrives, but I want to play a custom notification sound that I added; instead, the default system sound plays. I added the sound in the code below. Can someone tell me why this sound is not playing when the push is received?
self.addEventListener('push', function (event) {
  const data = event.data.json();
  console.log(data);

  const title = 'Sound Notification';
  const options = {
    sound: '../public/messageNotification.mp3',
  };

  try {
    // keep the service worker alive until the notification has been shown
    event.waitUntil(registration.showNotification(title, options));
  } catch (e) {
    console.error('showNotification failed', e);
  }
});
I think you can use HTMLAudioElement for this purpose. For example:
let sound: HTMLAudioElement;
sound = new Audio();
sound.src = '../public/messageNotification.mp3';
sound.load();
sound.play();
We would need to take a look at what registration.showNotification does, but if it is working correctly and the sound is still not playing, it might be because modern browsers block autoplay in some situations.
For example, Chrome has this autoplay policy. Firefox and Safari might have slightly different policies.
In these cases, you might need to find workarounds for each browser, or you can instruct users to always enable autoplay for your site. In Chrome 104, you do this by clicking the lock icon (next to the URL), selecting Site settings, and then choosing Allow under Sound.
const song = new Audio("url");
song.play(); // to play the audio
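Note that play() returns a promise that rejects when playback is blocked by the autoplay policy, so you can detect that case; a minimal sketch reusing the song object from above:
song.play().catch(function (err) {
  if (err.name === 'NotAllowedError') {
    // Autoplay was blocked; retry after a user gesture (e.g. a click)
    console.warn('Playback blocked by the autoplay policy', err);
  } else {
    console.error(err);
  }
});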
I have a video call application based on WebRTC. It is working as expected. However, while a call is going on, if I disconnect the audio device (mic + speaker) and connect it back, only the speaker part keeps working. The mic part seems to stop working; the other side can't hear anymore.
Is there any way to inform WebRTC to take audio input again once audio device is connected back?
Your question appears simple (the symmetry with speakers is alluring), but once we're dealing with users who have multiple cameras and microphones, it's not that simple: if your user disconnects the Bluetooth headset they were using, should you wait for them to reconnect it, or immediately switch to their laptop microphone? If the latter, do you switch back if they reconnect it later? These are application decisions.
The APIs to handle these things are primarily the ended and devicechange events and the replaceTrack() method. You may also need the deviceId constraint and the enumerateDevices() method to handle multiple devices.
However, to keep things simple, let's take the assumptions in your question at face value to explore the APIs:
When the user unplugs their sole microphone (not their camera) mid-call, our job is to resume conversation with it when they reinsert it, without dropping video:
First, we listen to the ended event to learn when our local audio track drops.
When that happens, we listen for a devicechange event to detect re-insertion (of anything).
When that happens, we could check what changed using enumerateDevices(), or simply try getUserMedia again (microphone only this time).
If that succeeds, use await sender.replaceTrack(newAudioTrack) to send our new audio.
This might look like this:
let sender;

(async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
    pc.addTrack(stream.getVideoTracks()[0], stream);
    sender = pc.addTrack(stream.getAudioTracks()[0], stream);
    // When the mic is unplugged, start watching for a device to be (re)inserted
    sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
  } catch (e) {
    console.log(e);
  }
})();

async function tryAgain() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({audio: true});
    // Swap the dead audio track for the new one; replaceTrack needs no renegotiation
    await sender.replaceTrack(stream.getAudioTracks()[0]);
    navigator.mediaDevices.ondevicechange = null;
    // Re-arm the handler in case the microphone is unplugged again
    sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
  } catch (e) {
    if (e.name == "NotFoundError") return;
    console.log(e);
  }
}

// Your usual WebRTC negotiation code goes here
The above is for illustration only. I'm sure there are lots of corner cases to consider.
I'm currently working on audio recording through the web browser (.wav format), and we currently use the following code to start recording and emit the data via a socket:
async start () {
  this.stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: false
  })
  this.audioCtx = new AudioContext()
  const source = this.audioCtx.createMediaStreamSource(this.stream)
  const scriptProcessor = this.audioCtx.createScriptProcessor(0, 1, 1)
  source.connect(scriptProcessor)
  scriptProcessor.connect(this.audioCtx.destination)
  scriptProcessor.onaudioprocess = event => {
    this.emit('record', event.inputBuffer.getChannelData(0))
  }
  return {
    device: this.stream.getAudioTracks().length && this.stream.getAudioTracks()[0].label || 'Unknown',
    sampleRate: this.audioCtx.sampleRate
  }
}
Although it seems to work well, silent sequences are randomly inserted into the emitted data (~8 consecutive frames in the sample). This doesn't seem to be hardware related, as we have the same issue regardless of the microphone used.
I would like to know whether it's the way we collect the data or the way we send it that causes this issue (and, if possible, how to fix it).
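One thing that might be worth ruling out on the collection side: getChannelData() returns a Float32Array view over the audio buffer, so if the socket emit serializes it asynchronously, copying the samples first is a cheap precaution (a sketch under that assumption, not a confirmed fix):
scriptProcessor.onaudioprocess = event => {
  // Copy the samples so a later, possibly asynchronous, serialization
  // cannot observe a buffer that has since been reused
  const samples = new Float32Array(event.inputBuffer.getChannelData(0))
  this.emit('record', samples)
}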
Thanks,
I am using SimpleWebRTC library found here: https://simplewebrtc.com
I got the signal-master running with STUN/TURN configured properly. It's capable of detecting other peers, so I assume STUN/TURN is functional. My problem is that when a peer starts their local video, other peers do not discover it UNLESS they reload the page. I want the video to be pushed to other peers automatically, without the need to reload the page. I think it has to do with the code below (which I took from the example), but I am not sure.
The reason I have autoRequestMedia set to false is that I want users to be able to view the other peers' cameras without having to turn their own devices on (which is also why I don't call webrtc.joinRoom in the readyToCall event).
Currently, users click a button that triggers startLocalVideo(), and the video is created in the element. The problem is that nothing gets pushed to other peers unless they reload the page. Hope that explains it all; let me know if you need more details.
var webrtc = new SimpleWebRTC({
  // the id/element dom element that will hold "our" video
  localVideoEl: 'localCam',
  // the id/element dom element that will hold remote videos
  remoteVideosEl: '',
  // immediately ask for camera access
  autoRequestMedia: false,
  autoRemoveVideos: true,
  url: 'MY SIGNAL-MASTER URL HERE',
  localVideo: {
    autoplay: true, // automatically play the video stream on the page
    mirror: false,  // flip the local video to mirror mode (for UX)
    muted: true     // mute local video stream to prevent echo
  }
});

webrtc.joinRoom('testchannel');

// a peer video has been added
webrtc.on('videoAdded', function (video, peer) {
  console.log('video added', peer);
  var remotes = document.getElementById('remoteCams');
  if (remotes) {
    var container = document.createElement('div');
    container.className = 'videoContainer';
    container.id = 'container_' + webrtc.getDomId(peer);
    container.appendChild(video);
    // suppress contextmenu
    // video.oncontextmenu = function () { return false; };
    remotes.appendChild(container);
  }
});

// a peer video was removed
webrtc.on('videoRemoved', function (video, peer) {
  console.log('video removed ', peer.nick);
  var remotes = document.getElementById('remoteCams');
  var el = document.getElementById(peer ? 'container_' + webrtc.getDomId(peer) : 'localScreenContainer');
  if (remotes && el) {
    remotes.removeChild(el);
  }
});
You have to put the join statement into the readyToCall listener:
webrtc.on('readyToCall', function () {
  webrtc.joinRoom('roomname');
});
or put the joinRoom call in a setTimeout.
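For the setTimeout variant, a rough sketch (the delay length is an arbitrary assumption, just long enough for the connection to the signaling server to come up):
// Wait a moment for SimpleWebRTC to connect to the signal-master before joining
setTimeout(function () {
  webrtc.joinRoom('testchannel');
}, 1000);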
I want to change from an audio/video stream to a "screensharing" stream:
peerConnection.removeStream(streamA) // __o_j_sep... in Screenshots below
peerConnection.addStream(streamB) // SSTREAM in Screenshots below
streamA is a video/audio stream coming from my camera and microphone.
streamB is the screencapture I get from my extension.
They are both MediaStream objects that look like this:
* 1 Remark
But if I remove streamA from the peerConnection and addStream(streamB) as above, nothing seems to happen.
The following works as expected (the stream on both ends is removed and re-added)
peerConnection.removeStream(streamA) // __o_j_sep...
peerConnection.addStream(streamA) // __o_j_sep...
More Details
I have found this example which does "the reverse" (Switch from screen capture to audio/video with camera) but can't spot a significant difference.
The peerConnection RTCPeerConnection object is actually created by this SIPML library (source code available here), and I access it like this:
var peerConnection = stack.o_stack.o_layer_dialog.ao_dialogs[1].o_msession_mgr.ao_sessions[0].o_pc
(Yes, this does not look right, but there is no official way to get access to the Peer Connection; see the discussions here and here.)
Originally I tried to just (ex)change the videoTracks of streamA with the videoTrack of streamB (see my question here). It was suggested that I should try to renegotiate the Peer Connection (by removing/adding streams to it), because addTrack does not trigger a re-negotiation.
I've also asked for help here but the maintainer seems very busy and didn't have a chance to respond yet.
* 1 Remark: Why does streamB not have a videoTracks property? The stream plays in an HTML <video> element and seems to "work". Here is how I get it:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: streamId,
        maxWidth: window.screen.width,
        maxHeight: window.screen.height
        //, maxFrameRate: 3
      }
    }
  },
  // success callback
  function (localMediaStream) {
    SSTREAM = localMediaStream; // streamB
  },
  // fail callback
  function (error) {
    console.log(error);
  });
It also seems to have a videoTrack:
I'm running:
OS X 10.9.3
Chrome Version 35.0.1916.153
To answer your first question: when modifying the MediaStream in an active peer connection, the peerconnection object will fire an onnegotiationneeded event. You need to handle that event and re-exchange your SDPs. The main reason for this is so that both parties know what streams are being sent between them. When the SDPs are exchanged, the MediaStream ID is included, and if there is a new stream with a new ID (even with all other things being equal), a re-negotiation must take place.
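For illustration only (sendToRemotePeer stands in for whatever signaling channel you use; it is not part of any library), handling that event could look like this:
// Re-negotiate whenever streams are added to or removed from the connection
peerConnection.onnegotiationneeded = function () {
  peerConnection.createOffer()
    .then(function (offer) {
      return peerConnection.setLocalDescription(offer);
    })
    .then(function () {
      // Send the new SDP over your signaling channel (hypothetical helper)
      sendToRemotePeer(peerConnection.localDescription);
    })
    .catch(function (err) {
      console.log(err);
    });
};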
For your second question (about SSTREAM): it does indeed contain video tracks, but there is no videoTracks attribute on webkitMediaStreams. You can grab tracks via their ID, however.
Since there is the possibility of having numerous tracks for each media type, there is no single attribute for a video track or audio track but instead an array of them. The .getVideoTracks() call returns an array of the current video tracks, so you COULD grab a particular video track by indicating its index, e.g. .getVideoTracks()[0].
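For example (SSTREAM being the screen-capture stream from the question):
var videoTracks = SSTREAM.getVideoTracks(); // array of video tracks
var firstVideoTrack = videoTracks[0];       // pick a particular track by index
console.log(firstVideoTrack.id, firstVideoTrack.label);
// or look the track up again later by its ID
var sameTrack = SSTREAM.getTrackById(firstVideoTrack.id);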
I do something similar: on clicking a button, I remove the active stream and add the other.
This is the way I do it, and it works perfectly for me:
// Stop sending the current camera/mic stream and remove it from the peer connection
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

// Chrome doesn't allow audio together with the 'screen' source,
// so the microphone is captured as a separate audio-only stream
var constraints_audio = { audio: true };

var gotAudioStream = function (localstream_aud) {
  _this.rtc.localstream_aud = localstream_aud;
  _this.rtc.mediaConstraints = constraints_audio;
  _this.rtc.createOffer();
};

getUserMedia(constraints_audio, gotAudioStream);

// Capture the screen as a video-only stream
var constraints_screen = {
  audio: false,
  video: {
    mandatory: {
      chromeMediaSource: 'screen'
    }
  }
};

var gotScreenStream = function (localstream) {
  _this.rtc.localstream = localstream;
  _this.rtc.mediaConstraints = constraints_screen;
  _this.rtc.createStream();
  _this.rtc.createOffer(); // re-negotiate so the remote side sees the new stream
};

getUserMedia(constraints_screen, gotScreenStream);
Chrome doesn't allow audio along with the 'screen' source, so I create a separate stream for it.
You will need to do the opposite in order to switch back to your older video stream or actually to any other stream you want.
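A rough sketch of that reverse direction, reusing the same (application-specific) _this.rtc wrapper and callback style as the snippet above:
// Stop the screen stream and renegotiate with the camera/mic again
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

var constraints_camera = { audio: true, video: true };

getUserMedia(constraints_camera, function (localstream) {
  _this.rtc.localstream = localstream;
  _this.rtc.mediaConstraints = constraints_camera;
  _this.rtc.createStream();
  _this.rtc.createOffer();
});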
Hope this helps