I am currently working on a WebRTC multi-peer connection and want to implement a feature to switch the camera from front to back while on a call.
This is the code I am using to switch cameras:
async function changevideo() {
  const audioSource = audioInputSelect.value;
  const videoSource = videoSelect.options[videoSelect.selectedIndex].value;
  var tempconstraints = {
    video: {
      deviceId: videoSource ? { exact: videoSource } : undefined,
      width: { max: 320 },
      height: { max: 240 }
    },
    audio: { deviceId: audioSource ? { exact: audioSource } : undefined },
  };
  var newstream = await navigator.mediaDevices.getUserMedia(tempconstraints);
  if (connections[socketId]) {
    Promise.all(connections[socketId].getSenders().map(function (sender) {
      debugger;
      return sender.replaceTrack(newstream.getTracks().find(function (track) {
        debugger;
        return track.kind === sender.track.kind;
      })).then(data => {
        console.log(data);
      });
    }));
    var track = localStream.getTracks().find(function (track) { return track.kind == videoTrack.kind });
    localStream.removeTrack(track);
    localStream.addTrack(videoTrack);
    connections[tempsocketid].onnegotiationneeded = function () {
      connections[tempsocketid].createOffer().then(function (offer) {
        return connections[tempsocketid].setLocalDescription(offer);
      }).then(function () {
        socket.emit('signal', socketId, JSON.stringify({ 'sdp': connections[tempsocketid].localDescription, 'room': roomNumber }), roomNumber);
      }).catch(e => console.log(e));
    }
  }
}
Here connections contains the RTCPeerConnection objects for all connected peers.
socketId is the id of the main user whose camera I want to switch, so connections[socketId] gives me the RTCPeerConnection for the user with that socketId.
newstream is the stream after switching the camera.
If I directly update the src of the video element to newstream, the camera changes only on my own device.
I have searched a lot, and everywhere the suggested solution is to use replaceTrack, but it is not working in my case. Every time I use it, nothing happens on screen and I also get no error in the console.
Update
I have used onnegotiationneeded together with removeTrack and addTrack.
tempsocketid is the socketId of another user who is connected.
So I have two users: one has its socket id stored in socketId and the other in tempsocketid. Currently I am trying to switch the camera of the user with socket id socketId,
and when negotiation runs I get this error in the other user's console:
DOMException: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Error processing ICE candidate
You are probably unable to cause a renegotiation, so the change to your camera's facingMode cannot affect the other peers. As I see it you don't use facingMode explicitly but replaceTrack instead, yet you may still not be triggering a renegotiation. Check out the things that cause renegotiation: https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack#Usage_notes
Changing the facingMode setting by applying constraints with applyConstraints may be a solution that avoids replaceTrack altogether.
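A minimal sketch of that idea, reusing localStream from your code (whether the device actually honors an exact facingMode switch varies):
// ask the existing video track to switch to the rear camera instead of replacing it
const camTrack = localStream.getVideoTracks()[0];
await camTrack.applyConstraints({ facingMode: { exact: "environment" } });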
A strange idea that comes to mind is to dispatch the negotiationneeded event yourself, but I would only try that after chasing down why replacing the track, changing the camera, and everything else fails to cause a renegotiation on its own.
About another possible reason: among the things that trigger renegotiation, your back camera's resolution is most probably higher than the front camera's, so that would normally count as a reason. If you start from the back camera and then switch to the front, there might be no reason at all. I am also suspicious of your max constraints on width and height: they may be low enough that both cameras end up with the same size and resolution, and so there is no reason to renegotiate according to the list on the linked page above. I suggest removing them.
Also, replaceTrack returns a promise, so make sure the map callback returns those promises; that way the array passed to Promise.all actually contains them rather than undefined values.
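A minimal sketch of that last point, using the names from your snippet:
// return each replaceTrack promise from map so Promise.all actually waits on them
await Promise.all(connections[socketId].getSenders().map(function (sender) {
  const newTrack = newstream.getTracks().find(function (track) {
    return track.kind === sender.track.kind;
  });
  return sender.replaceTrack(newTrack);
}));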
I have fixed the issue. The problem was with socketId: I was sending the socketId of the current user to a different user.
As replaceTrack was not working, I used removeTrack and addTrack to force a renegotiation.
Here is my working code:
if (connections[socketId]) {
  localStream.getVideoTracks()[0].enabled = false;
  var track = localStream.getTracks().find(function (track) { return track.kind == videoTrack.kind });
  localStream.removeTrack(track);
  localStream.addTrack(videoTrack);
  connections[tempsocketid].onnegotiationneeded = function () {
    console.log('negotiationstarted');
    connections[tempsocketid].createOffer().then(function (offer) {
      return connections[tempsocketid].setLocalDescription(offer);
    }).then(function () {
      console.log('negotiation signal sent');
      socket.emit('signal', tempsocketid, JSON.stringify({ 'sdp': connections[tempsocketid].localDescription, 'room': roomNumber }), roomNumber);
    }).catch(e => console.log(e));
  }
  localStream.getVideoTracks()[0].enabled = true;
}
I want to send the watch time of a video the user has seen when the user closes the page, reloads the page, or navigates to another page. I am using the visibilitychange event for this. When I navigate to another page, the API call runs perfectly, but the data I am sending to the API is not updated correctly. I am providing the code and the output below so you can see exactly what my problem is.
useEffect(async () => {
  const x = 0;
  console.log("use effect is run number ::::", x + 1);
  window.addEventListener("visibilitychange", sendViewTime);
  return () => {
    window.removeEventListener("visibilitychange", sendViewTime);
  };
}, []);
I have added the event listener in the useEffect.
The sendViewTime method is the one I want to call on the visibilitychange event. The method itself works, but for some reason the params are not updated, even though I have set their states in the relevant hooks.
const sendViewTime = async () => {
  if (document.visibilityState === "hidden") {
    console.log("the document is hidden");
    const value = localStorage.getItem("jwt");
    const initialValue = JSON.parse(value);
    console.log("the send View Time is :::", played_time);
    const params = {
      video_id: video_id_url,
      viewTime: played_time,
      MET: MET_value,
      weight: "",
    };
    console.log("params are :::", params);
    await setEffort(params, initialValue).then((res) => {
      console.log("set effort api response is ::: ", res);
    });
  } else {
    console.log("the document is back online");
  }
};
// This onProgress prop is from react-player. Here I am updating the state of the video progress.
onProgress={(time) => {
  console.log("the time is :::", time);
  const time_1 = Math.round(time.playedSeconds);
  const time_2 = JSON.stringify(time_1);
  setPlayed_time(time_2);
  console.log("the played time is :::", played_time);
}}
//OUTPUT
// the document is hidden.
// the send View Time is :::
//params are ::: {video_id: '23', viewTime: '', MET: undefined, weight: ''}
//set effort api response is ::: {status: 200, message: 'Success', data: {…}, time: '2.743 s'}
//the document is back online
Never mind, I found the solution. It turns out I have to pass played_time and the MET value in the dependency array of the useEffect. If you want to know how useEffect works, please see this link: In general is it better to use one or many useEffect hooks in a single component?
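A minimal sketch of what I mean, using the state names from my snippets above:
useEffect(() => {
  // re-register the listener whenever these values change, so sendViewTime
  // closes over the latest state instead of a stale value
  window.addEventListener("visibilitychange", sendViewTime);
  return () => {
    window.removeEventListener("visibilitychange", sendViewTime);
  };
}, [played_time, MET_value]);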
It's been quite a long time since I've posted here. Just wanted to bounce this off of you, as it has been making my brain hurt. I have been developing a real-time video chat app with WebRTC. Now, I know that the obligatory "it's somewhere in the network stack (NAT)" answer always applies.
As always seems to be the case with WebRTC, it works perfectly in my browser and on my laptop, between tabs or between Safari and Chrome. However, over the internet on HTTPS on a site I've created, it is spotty at best. The laptop can accept and display the media stream from my iPhone, but the iPhone cannot receive the media stream from my laptop; it just shows a black square for the remote video.
Any pointers would be most appreciated, as I've been going crazy. I know that TURN servers are an inevitable aspect of WebRTC, but I'm trying to avoid employing one.
So, here is my Session class, which handles essentially all of the WebRTC-related client-side session logic:
(The publish method is just an inherited member that emulates EventTarget/EventEmitter functionality, and the p2p config just points at Google's public STUN servers.)
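For context, the config is roughly this (the STUN URL shown is the usual Google public one; treat the exact list as a placeholder):
const config = {
  p2pConfig: {
    iceServers: [
      { urls: "stun:stun.l.google.com:19302" } // public STUN only, no TURN entry yet
    ]
  }
}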
class Session extends Notifier {
  constructor(app) {
    super()
    this.app = app
    this.client = this.app.client
    this.clientSocket = this.client.socket
    this.p2p = new RTCPeerConnection(this.app.config.p2pConfig)
    this.closed = false
    this.initialize()
  }

  log(message) {
    if (this.closed) return
    console.log(`[${Date.now()}] {Session} ${message}`)
  }

  logEvent(event, message) {
    let output = event
    if (message) output += `: ${message}`
    this.log(output)
  }

  signal(family, data) {
    if (this.closed) return
    if (! data) return
    let msg = {}
    msg[family] = data
    this.clientSocket.emit("signal", msg)
  }

  initialize() {
    this.p2p.addEventListener("track", async event => {
      if (this.closed) return
      try {
        const [remoteStream] = event.streams
        this.app.mediaManager.remoteVideoElement.srcObject = remoteStream
      } catch (e) {
        this.logEvent("Failed adding track", `${e}`)
        this.close()
      }
    })
    this.p2p.addEventListener("icecandidate", event => {
      if (this.closed) return
      if (! event.candidate) return
      this.signal("candidate", event.candidate)
      this.logEvent("Candidate", "Sent")
    })
    this.p2p.addEventListener("connectionstatechange", event => {
      if (this.closed) return
      switch (this.p2p.connectionState) {
        case "connected":
          this.publish("opened")
          this.logEvent("Opened")
          break
        // A fail safe to ensure that faulty connections
        // are terminated abruptly
        case "disconnected":
        case "closed":
        case "failed":
          this.close()
          break
        default:
          break
      }
    })
    this.clientSocket.on("initiate", async () => {
      if (this.closed) return
      try {
        const offer = await this.p2p.createOffer()
        await this.p2p.setLocalDescription(offer)
        this.signal("offer", offer)
        this.logEvent("Offer", "Sent")
      } catch (e) {
        this.logEvent("Uninitiated", `${e}`)
        this.close()
      }
    })
this.clientSocket.on("signal", async data => {
if (this.closed) return
try {
if (data.offer) {
this.p2p.setRemoteDescription(new RTCSessionDescription(data.offer))
this.logEvent("Offer", "Received")
const answer = await this.p2p.createAnswer()
await this.p2p.setLocalDescription(answer)
this.signal("answer", answer)
this.logEvent("Answer", "Sent")
}
if (data.answer) {
const remoteDescription = new RTCSessionDescription(data.answer)
await this.p2p.setRemoteDescription(remoteDescription)
this.logEvent("Answer", "Received")
}
if (data.candidate) {
try {
await this.p2p.addIceCandidate(data.candidate)
this.logEvent("Candidate", "Added")
} catch (e) {
this.logEvent("Candidate", `Failed => ${e}`)
}
}
} catch (e) {
this.logEvent("Signal Failed", `${e}`)
this.close()
}
})
this.app.mediaManager.localStream.getTracks().forEach(track => {
this.p2p.addTrack(track, this.app.mediaManager.localStream)
})
}
  close() {
    if (this.closed) return
    this.p2p.close()
    this.app.client.unmatch()
    this.logEvent("Closed")
    this.closed = true
  }
}
I've worked with WebRTC for quite a while now and am deploying a production-level website for many-to-many broadcasts, so I can happily help you with this answer, but don't hit me as I'm about to spoil some of your fun.
The Session Description Protocol you generate contains the send/recv IPs of both connecting users. Because neither of you is actually port-forwarded to allow this connection or to act as a host, a TURN server is in fact required to work around this. For security reasons it's like this, and most users will require a TURN server if you go this route.
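As a rough sketch, adding a TURN entry just means extending the iceServers list you already pass to RTCPeerConnection (the URL and credentials below are placeholders for your own TURN deployment):
const p2pConfig = {
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },
    {
      urls: "turn:turn.example.com:3478", // placeholder TURN server
      username: "user",                   // placeholder credentials
      credential: "pass"
    }
  ]
}
const p2p = new RTCPeerConnection(p2pConfig)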
You can skip a TURN server completely, but you'll still need a server: you'd go the route of sending/receiving RTP and routing it through an MCU/SFU.
These solutions are designed to take in WebRTC-produced tracks and output them to many viewers (consumers).
Here's an SFU I use that works great for many-to-many if you can code it. It's Node.js friendly if you don't know languages outside JavaScript:
https://mediasoup.org/
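A very rough sketch of the server-side bootstrap, as I remember the mediasoup v3 API (verify the exact options against the docs above):
const mediasoup = require("mediasoup")

async function startSfu() {
  // one worker per CPU core is typical; a single worker is enough for a sketch
  const worker = await mediasoup.createWorker()
  const router = await worker.createRouter({
    mediaCodecs: [
      { kind: "audio", mimeType: "audio/opus", clockRate: 48000, channels: 2 },
      { kind: "video", mimeType: "video/VP8", clockRate: 90000 }
    ]
  })
  // transports, producers and consumers are then created per connecting client
  return router
}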
I have had a nightmare of a time attempting to stop the video tracks and turn off the camera. Can ANYONE tell me what I am missing here? Snippet below is the event handler for when a room is disconnected. The code executes fine, but the camera stays on. Thanks in advance.
this.roomObj.once('disconnected', (room: Room, error) => {
  // if (error) {
  //   console.log(`An error has occurred with the room connection: ${error}`);
  // }
  room.localParticipant.tracks.forEach(publication => {
    publication.track.stop();
    const attachedElements = publication.track.detach();
    attachedElements.forEach(element => {
      element.stop();
      element.remove();
    });
    room.localParticipant.videoTracks.forEach(video => {
      const trackConst = [video][0].track;
      trackConst.stop(); // <- error
      trackConst.detach().forEach(element => {
        element.stop();
        element.remove();
      });
      room.localParticipant.unpublishTrack(trackConst);
    });
    let element = this.remoteVideo1Container.nativeElement;
    while (element.firstChild) {
      element.removeChild(element.firstChild);
    }
    let localElement = this.localVideo.nativeElement;
    while (localElement.firstChild) {
      localElement.removeChild(localElement.firstChild);
    }
    // this.router.navigate(['thanks']);
  });
}, (error) => {
  alert(error.message);
});
I triggered a full page redirect with window.location.replace, rather than using react-router-dom's <Redirect to={} />, to fully shut off the camera. This may not be possible in your case, and there may be a better solution.
You could point it at the same route you were going to navigate to anyway. If you weren't planning on changing pages, though, my solution won't be of much use.
But having had the same issue with shutting off the desktop camera light in React, I figured I would share my solution. I can delete or remove this post if this answer displeases anyone, or once a Twilio rep (or some mystery person) comes by with a better response.
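A minimal sketch of the workaround, reusing the 'thanks' route from your commented-out navigate call:
// stop what you can, then force a full page load so the browser releases the camera
room.localParticipant.tracks.forEach(publication => publication.track.stop());
window.location.replace('/thanks'); // a full reload, unlike a client-side router navigation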
I am building a screen sharing function using WebRTC.
My code works well during a video call, but in an audio-only call it does not work.
Here is my code.
This is where I create the peer connection and add the stream for an audio call:
const senders = [];
var mediaConstraints = { audio: true, video: false };
navigator.mediaDevices.getUserMedia(mediaConstraints)
  .then(function (localStream) {
    localLiveStream = localStream;
    document.getElementById("local_video").srcObject = localLiveStream;
    localLiveStream.getTracks().forEach(track => senders.push(myPeerConnection.addTrack(track, localLiveStream)));
  })
  .catch(handleGetUserMediaError);
And this runs when screen sharing starts:
mediaConstraints.video = true;
let displayStream = await navigator.mediaDevices.getDisplayMedia(mediaConstraints);
if (displayStream) {
  document.getElementById("local_video").srcObject = displayStream;
  console.log("senders: ", senders);
  try {
    senders.find(sender => sender.track.kind === 'video').replaceTrack(displayStream.getTracks()[0]);
  } catch (e) {
    console.log("Error: ", e)
  }
}
When screen sharing starts in this state, every sender.track.kind is "audio",
so senders.find(sender => sender.track.kind === 'video') returns undefined,
and replaceTrack therefore throws an error.
Is there any other way to do screen sharing?
You need to add a video track in order to achieve this. It will require renegotiation.
So add the screen track (not replace) to the connection and then create the offer again!
connection.addTrack(screenVideoTrack);
Check this for reference:
https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/onnegotiationneeded
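A minimal sketch of that, reusing myPeerConnection and getDisplayMedia from your snippets (sendSignal is a stand-in for whatever signaling you already use):
const displayStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
const screenVideoTrack = displayStream.getVideoTracks()[0];

// adding a brand-new video track (instead of replacing one) triggers negotiationneeded
myPeerConnection.addTrack(screenVideoTrack, displayStream);

myPeerConnection.onnegotiationneeded = async () => {
  const offer = await myPeerConnection.createOffer();
  await myPeerConnection.setLocalDescription(offer);
  sendSignal({ sdp: myPeerConnection.localDescription }); // hypothetical signaling helper
};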
I have an application which scans 2D barcodes and then retrieves data from the URLs provided by the codes. If the user loses their internet connection, the application starts storing the URLs via AsyncStorage. The issue is that I need a listener so that, upon regaining an internet connection, the application runs a given method. Are there any recommended ways to implement a connection listener like this?
Edit:
I have tried using a NetInfo event listener; however, I'm not sure whether I'm using it correctly, as it always calls the passed function, even when the internet status hasn't changed.
_connectionHandler = (e) => {
  this.setState({ cameraActive: false })
  NetInfo.getConnectionInfo().then((connectionInfo) => {
    if (connectionInfo.type === "none") {
      console.log("No internet")
      dataArray.push(e.data)
      let barcodeData_delta = {
        data: dataArray
      }
      AsyncStorage.mergeItem(STORAGE_KEY, JSON.stringify(barcodeData_delta));
      NetInfo.isConnected.addEventListener(
        'connectionChange',
        this._handleConnectionChange(e.data)
      );
      this.setState({ cameraActive: true })
    } else {
      console.log("Internet available -> Going to read barcode now")
      this._handleBarCodeRead(e.data);
    }
  })
}
React Native has NetInfo documentation where you can see how to add a listener for connection changes and do whatever you want when it is called.
Add a handler to the isConnected property:
NetInfo.isConnected.addEventListener(
  'connectionChange',
  this._connectionHandler
);
And here is a function that handles the change. Just adjust your setState calls for the camera; I couldn't figure out when you want to call it.
_connectionHandler = (isConnected) => {
  this.setState({ cameraActive: false })
  if (!isConnected) {
    console.log("No internet")
    dataArray.push(e.data)
    let barcodeData_delta = {
      data: dataArray
    }
    AsyncStorage.mergeItem(STORAGE_KEY, JSON.stringify(barcodeData_delta));
    this.setState({ cameraActive: true })
  } else {
    console.log("Internet available -> Going to read barcode now")
    this._handleBarCodeRead(e.data);
  }
}
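For completeness, a sketch of removing the listener again with the same legacy NetInfo API, for example when the component unmounts:
componentWillUnmount() {
  NetInfo.isConnected.removeEventListener(
    'connectionChange',
    this._connectionHandler
  );
}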