WebRTC both remote and local video is displayed with local stream - javascript

Hello, I am a newbie to WebRTC and I tried this code:
const yourVideo = document.querySelector("#face_cam_vid");
const theirVideo = document.querySelector("#thevid");

(async () => {
  if (!("mediaDevices" in navigator) || !("RTCPeerConnection" in window)) {
    alert("Sorry, your browser does not support WebRTC.");
    return;
  }
  const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
  yourVideo.srcObject = stream;

  const configuration = {
    iceServers: [{urls: "stun:stun.l.google.com:19302"}]
  };
  const yours = new RTCPeerConnection(configuration);
  const theirs = new RTCPeerConnection(configuration);

  for (const track of stream.getTracks()) {
    yours.addTrack(track, stream);
  }
  theirs.ontrack = e => theirVideo.srcObject = e.streams[0];
  yours.onicecandidate = e => theirs.addIceCandidate(e.candidate);
  theirs.onicecandidate = e => yours.addIceCandidate(e.candidate);

  const offer = await yours.createOffer();
  await yours.setLocalDescription(offer);
  await theirs.setRemoteDescription(offer);
  const answer = await theirs.createAnswer();
  await theirs.setLocalDescription(answer);
  await yours.setRemoteDescription(answer);
})();
and it works, but only partly: https://imgur.com/a/nG7Xif6 . In short, I am creating ONE-to-ONE random video chatting like Omegle, but this code displays both the "remote" (stranger's) and "local" (mine) video with my local stream. What I want is for a user to wait for a second user to have a video chat; when a third user enters, they should wait for a fourth, and so on. I hope someone will help me with that.

You're confusing a local-loop demo—what you have—with a chat room.
A local-loop demo is a short-circuit client-only proof-of-concept, linking two peer connections on the same page to each other. Utterly useless, except to prove the API works and the browser can talk to itself.
It contains localPeerConnection and remotePeerConnection—or pc1 and pc2—and is not how one would typically write WebRTC code. It leaves out signaling.
Signaling is hard to demo without a server, but I show people this tab demo. Right-click and open it in two adjacent windows, and click the Call! button on one to call the other. It uses localSocket, a non-production hack I made using localStorage for signaling.
Though just as useless, a tab demo at least looks more like real code:
const pc = new RTCPeerConnection();

call.onclick = async () => {
  video.srcObject = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
  for (const track of video.srcObject.getTracks()) {
    pc.addTrack(track, video.srcObject);
  }
};

pc.ontrack = e => video.srcObject = e.streams[0];
pc.oniceconnectionstatechange = e => console.log(pc.iceConnectionState);
pc.onicecandidate = ({candidate}) => sc.send({candidate});
pc.onnegotiationneeded = async e => {
  await pc.setLocalDescription(await pc.createOffer());
  sc.send({sdp: pc.localDescription});
};

const sc = new localSocket();
sc.onmessage = async ({data: {sdp, candidate}}) => {
  if (sdp) {
    await pc.setRemoteDescription(sdp);
    if (sdp.type == "offer") {
      await pc.setLocalDescription(await pc.createAnswer());
      sc.send({sdp: pc.localDescription});
    }
  } else if (candidate) await pc.addIceCandidate(candidate);
};
There's a single pc—your half of the call—and there's an onmessage signaling callback to handle the timing-critical asymmetric offer/answer negotiation correctly. Same JS on both sides.
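The localSocket used above is not a browser API; it is the answer's own localStorage hack, whose actual implementation isn't shown. A minimal sketch of what such a thing might look like (an assumed implementation, not the author's real code):

```javascript
// Hypothetical localSocket: toy signaling over localStorage. The "storage"
// event fires in OTHER tabs of the same origin whenever a key changes, so
// two adjacent tabs can exchange offers, answers, and candidates without
// a server. Do not use this in production.
class localSocket {
  constructor(key = "signal") {
    this.key = key;
    this.onmessage = null;
    window.addEventListener("storage", e => {
      if (e.key === this.key && e.newValue && this.onmessage) {
        const { _nonce, ...data } = JSON.parse(e.newValue);
        this.onmessage({ data });
      }
    });
  }
  send(data) {
    // A nonce forces a storage event even if the same payload is sent twice.
    localStorage.setItem(this.key, JSON.stringify({ ...data, _nonce: Math.random() }));
  }
}
```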
This still isn't a chat room. For that you need server logic to determine how people meet, and a WebSocket server for signaling. Try this tutorial on MDN, which culminates in a chat demo.
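As for the original one-to-one matchmaking question, the server-side pairing logic itself is tiny. A sketch (names are made up) that holds one waiting user and pairs them with the next arrival, Omegle-style:

```javascript
// Hypothetical matchmaking: the first user waits; the second completes the
// pair; a third starts a new wait, and so on. The server would then relay
// signaling messages only between the two sockets in a returned pair.
let waiting = null;

function onJoin(socket) {
  if (waiting === null) {
    waiting = socket;             // first user: wait for a partner
    return null;
  }
  const pair = [waiting, socket]; // second user: complete the pair
  waiting = null;                 // next arrival starts a fresh wait
  return pair;
}
```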

Related

Adding a new Media Stream in mid of a call in WebRTC (peerjs)

I am making a WebRTC video call application using the peerjs library, and every peer can choose what stream to start the call with, so I have two streams, an audio stream and a video stream:
const peer = new Peer();
let remotePeerConnection = null;
let audioStream;
let videoStream;

try {
  audioStream = await navigator.mediaDevices.getUserMedia({audio: true});
} catch (err) {
  audioStream = null;
}
try {
  videoStream = await navigator.mediaDevices.getUserMedia({video: true});
} catch (err) {
  videoStream = null;
}
And then I mix them together
let mixedStream = new MediaStream();
if (audioStream) {
  mixedStream.addTrack(audioStream.getAudioTracks()[0]);
}
if (videoStream) {
  mixedStream.addTrack(videoStream.getVideoTracks()[0]);
}
const myVideo = document.getElementById("my-video");
myVideo.srcObject = mixedStream;
myVideo.play();
and I have a call function to call the other peer with the mixed stream, which can be video only, audio only, or both:
const call = (peerID) => {
  const call = peer.call(peerID, mixedStream);
  remotePeerConnection = call.peerConnection;
  call.on("stream", remoteStream => {
    const remoteVideo = document.getElementById("remote-video");
    remoteVideo.srcObject = remoteStream;
    remoteVideo.play();
  });
};
The question is: if I start the call with only an audio stream (so the mixed stream only has an audio track added at the beginning), how could I later add a video stream to the call? To be clear, I don't mean replacing an existing track, which I already know how to do, and which looks like this:
// assume videoTrack comes from wherever
const sender = remotePeerConnection.getSenders().find(s => s.track.kind === "video");
if (sender) {
  sender.replaceTrack(videoTrack);
}
I tried adding the video track using the addTrack method on the remotePeerConnection:
try {
  videoStream = await navigator.mediaDevices.getDisplayMedia({video: true});
} catch (err) {
  console.log(err);
}
if (videoStream) {
  const videoTrack = videoStream.getVideoTracks()[0];
  remotePeerConnection.addTrack(videoTrack);
}
but it does not seem to work. How could I do that without closing the call and re-calling with a new mixed stream?

How do I stream an audio file to nodejs while it's still being recorded?

I am using the MediaStream Recording API to record audio in the browser, like this (courtesy of https://github.com/bryanjenningz/record-audio):
const recordAudio = () =>
  new Promise(async resolve => {
    // This wants to be secure. It will throw unless served from https:// or localhost.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const mediaRecorder = new MediaRecorder(stream);
    let audioChunks = [];

    mediaRecorder.addEventListener('dataavailable', event => {
      audioChunks.push(event.data);
      console.log("Got audioChunk!!", event.data.size, event.data.type);
      // mediaRecorder.requestData()
    });

    const start = () => {
      audioChunks = [];
      mediaRecorder.start(1000); // milliseconds per recorded chunk
    };

    const stop = () =>
      new Promise(resolve => {
        mediaRecorder.addEventListener('stop', () => {
          const audioBlob = new Blob(audioChunks, { type: 'audio/mpeg' });
          const audioUrl = URL.createObjectURL(audioBlob);
          const audio = new Audio(audioUrl);
          const play = () => audio.play();
          resolve({ audioChunks, audioBlob, audioUrl, play });
        });
        mediaRecorder.stop();
      });

    resolve({ start, stop });
  });
I would like to modify this code to start streaming to Node.js while it's still recording. I understand the header won't be complete until the recording finishes. I can either account for that on the Node.js side, or perhaps I can live with invalid headers, because I'll be feeding this into ffmpeg on Node.js anyway. How do I do this?
The trick is to start your recorder with mediaRecorder.start(timeSlice), where timeSlice is the number of milliseconds the browser waits before emitting a dataavailable event with a blob of data.
Then, in your event handler for dataavailable you call the server:
mediaRecorder.addEventListener('dataavailable', event => {
  myHTTPLibrary.post(event.data);
});
That's the general solution. It's not possible to insert an example here, because a code sandbox can't ask you to use your webcam, but I've created one here. It simply sends your data to Request Bin, where you can watch the data stream in.
There are some other things you'll need to think about if you want to stitch the video or audio back together. The blog post touches on that.

How do I record AND download / upload the webcam stream on server (javascript) WHILE using the webcam input for facial recognition (opencv4nodejs)?

I had to write a program for facial recognition in JavaScript, for which I used the opencv4nodejs API, since there are not many working examples. Now I somehow want to record and save the stream (for saving on the client side or uploading to the server) along with the audio. This is where I am stuck. Any help is appreciated.
In simple words, I need to use the webcam input for multiple purposes: one, for facial recognition, and two, to somehow save it; the latter is what I'm unable to do. In the worst case, if that's not possible, instead of recording and saving the webcam video I could also save a complete screen recording. Please answer if there's a workaround for this.
Below is what I tried to do, but it doesn't work for obvious reasons.
$(document).ready(function () {
  run1();
});

let chunks = [];

// run1() for uploading the model and for the facecam
async function run1() {
  const MODELS = "/models";
  await faceapi.loadSsdMobilenetv1Model(MODELS);
  await faceapi.loadFaceLandmarkModel(MODELS);
  await faceapi.loadFaceRecognitionModel(MODELS);
  var _stream;
  // Accessing the user webcam
  const videoEl = document.getElementById('inputVideo');
  navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true
  }).then(
    (stream) => {
      _stream = stream;
      recorder = new MediaRecorder(_stream);
      recorder.ondataavailable = (e) => {
        chunks.push(e.data);
        console.log(chunks, i);
        if (i == 20) makeLink(); // Trying to make a link from the blob for some i == 20
      };
      videoEl.srcObject = stream;
    },
    (err) => {
      console.error(err);
    }
  );
}

// run2() main recognition code and training
async function run2() {
  // wait for the results of mtcnn
  const input = document.getElementById('inputVideo');
  const mtcnnResults = await faceapi.ssdMobilenetv1(input);
  // Detect all the faces in the webcam
  const fullFaceDescriptions = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors();
  // Training the algorithm with the given data of the current student
  const labeledFaceDescriptors = await Promise.all(
    CurrentStudent.map(
      async function (label) {
        // Training the algorithm with the current students
        for (let i = 1; i <= 10; i++) {
          // console.log(label);
          const imgUrl = `http://localhost:5500/StudentData/${label}/${i}.jpg`;
          const img = await faceapi.fetchImage(imgUrl);
          // detect the face with the highest score in the image and compute its landmarks and face descriptor
          const fullFaceDescription = await faceapi.detectSingleFace(img).withFaceLandmarks().withFaceDescriptor();
          if (!fullFaceDescription) {
            throw new Error(`no faces detected for ${label}`);
          }
          const faceDescriptors = [fullFaceDescription.descriptor];
          return new faceapi.LabeledFaceDescriptors(label, faceDescriptors);
        }
      }
    )
  );
  const maxDescriptorDistance = 0.65;
  const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, maxDescriptorDistance);
  const results = fullFaceDescriptions.map(fd => faceMatcher.findBestMatch(fd.descriptor));
  i++;
}

// I somehow want this to work
function makeLink() {
  alert("ML");
  console.log("IN MAKE LINK");
  let blob = new Blob(chunks, {
      type: media.type
    }),
    url = URL.createObjectURL(blob),
    li = document.createElement('li'),
    mt = document.createElement(media.tag),
    hf = document.createElement('a');
  mt.controls = true;
  mt.src = url;
  hf.href = url;
  hf.download = `${counter++}${media.ext}`;
  hf.innerHTML = `download ${hf.download}`;
  li.appendChild(mt);
  li.appendChild(hf);
  ul.appendChild(li);
}

// onPlay(video) function
async function onPlay(videoEl) {
  run2();
  setTimeout(() => onPlay(videoEl), 50);
}
I'm not familiar with JavaScript, but in general only one program may communicate with the camera at a time. You will probably need to write a server that reads the data from the camera and then distributes it to your facial recognition, recording, and so on.

discord.js - music bot is buggy

After around 1-2 minutes of playing a song, my bot says that playing is finished, whatever the song's length is. Here is the link to it: https://github.com/Sheesher/amos
I guess this bug isn't caused by the code...
const ytdl = require('ytdl-core-discord');
const validUrl = require('valid-url');

let servers = {};

const play = (msg) => {
  const args = msg.content.substring(1).split(" ");
  const link = args[1];
  if (!link) {
    return msg.channel.send('You must provide a link.');
  }
  if (!msg.member.voice.channel) {
    return msg.channel.send('You have to be in a voice chat.');
  }
  if (!validUrl.isUri(link)) {
    return msg.channel.send('That aint be link.');
  }
  if (!servers[msg.guild.id]) servers[msg.guild.id] = {
    queque: []
  };
  let server = servers[msg.guild.id];
  server.queque.push(link);
  if (!msg.guild.voiceConnection) msg.member.voice.channel.join().then((connection) => {
    playSong(connection, link, msg);
  });
};

const playSong = async (connection, url, msg) => {
  const server = servers[msg.guild.id];
  server.dispatcher = connection.play(await ytdl(url), { type: 'opus' });
  server.queque.shift();
  server.dispatcher.on("end", () => {
    if (server.queque[0]) {
      playSong(connection, url);
    } else {
      connection.disconnect();
    }
  });
  server.dispatcher.on('finish', () => log(s('playing finished')));
};
This is actually an issue many others suffer from as well. It comes from how ytdl-core (even the Discord version) handles YouTube's streams: if it loses the connection, it tries to redirect to the stream, redirects too many times, and crashes or skips the song. Even the music bot I recently created suffers from this, but I kept it that way for beginners to learn from. Honestly, the way around this is to just not use ytdl-core. Use something like Lavalink, which handles all the music playing for you.

How to determine WebRTC dataChannel.send is delivered?

I am trying to write a chat application using WebRTC, and I can send messages over the data channel using code like the one below:
const peerConnection = new RTCPeerConnection();
const dataChannel =
  peerConnection.createDataChannel("myLabel", dataChannelOptions);

dataChannel.onerror = (error) => {
  console.log("Data Channel Error:", error);
};
dataChannel.onmessage = (event) => {
  console.log("Got Data Channel Message:", event.data);
};
dataChannel.onopen = () => {
  dataChannel.send("Hello World!");
};
dataChannel.onclose = () => {
  console.log("The Data Channel is Closed");
};
With dataChannel.send() I can send data over the channel correctly, but I am wondering: is there any way to determine whether the sent message was delivered to the other side?
The simplest answer is: send a reply.
But you may not need to, if you use an ordered, reliable datachannel (which is the default).
An ordered reliable datachannel
With one of these, you determine a message was sent by waiting for the bufferedAmount to go down:
const pc1 = new RTCPeerConnection(), pc2 = new RTCPeerConnection();
const channel = pc1.createDataChannel("chat");

chat.onkeypress = async e => {
  if (e.keyCode != 13) return;
  const before = channel.bufferedAmount;
  channel.send(chat.value);
  const after = channel.bufferedAmount;
  console.log(`Queued ${after - before} bytes`);
  channel.bufferedAmountLowThreshold = before; // set floor trigger and wait
  await new Promise(r => channel.addEventListener("bufferedamountlow", r));
  console.log(`Sent ${after - channel.bufferedAmount} bytes`);
  chat.value = "";
};

pc2.ondatachannel = e => e.channel.onmessage = e => console.log(`> ${e.data}`);
pc1.onicecandidate = e => pc2.addIceCandidate(e.candidate);
pc2.onicecandidate = e => pc1.addIceCandidate(e.candidate);
pc1.oniceconnectionstatechange = e => console.log(pc1.iceConnectionState);
pc1.onnegotiationneeded = async e => {
  await pc1.setLocalDescription(await pc1.createOffer());
  await pc2.setRemoteDescription(pc1.localDescription);
  await pc2.setLocalDescription(await pc2.createAnswer());
  await pc1.setRemoteDescription(pc2.localDescription);
};
Chat: <input id="chat"><br>
Since the channel is reliable, it won't give up sending this message until it's been received.
Since the channel is ordered, it won't send a second message before this message has been received.
This lets you send a bunch of messages in a row without waiting on a reply. As long as the bufferedAmount keeps going down, you know the messages are being sent and received.
In short, to determine a message was received, send a second message, or have the other side send a reply.
An unreliable datachannel
If you're using an unreliable datachannel, then sending a reply is the only way. But since there's no guarantee the reply will make it back, this may produce false negatives, causing duplicate messages on the receiving end.
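For the unreliable case, the usual fix is an id-plus-ack layer: retry each message until its id is acknowledged, and drop already-seen ids on the receiving side. A rough sketch (the channel objects stand in for RTCDataChannels; the function names are made up):

```javascript
// Hypothetical ack layer over an unreliable channel. Sender: resend each
// message on a timer until its id is acknowledged. Receiver: ack every id,
// but deliver each id only once, so retries never surface as duplicates.
function makeAckedSender(channel, retryMs = 500) {
  let nextId = 0;
  const pending = new Map(); // id -> retry timer
  channel.onmessage = ({ data }) => {
    const { ack } = JSON.parse(data);
    clearInterval(pending.get(ack)); // peer confirmed receipt; stop retrying
    pending.delete(ack);
  };
  return text => {
    const id = nextId++;
    const payload = JSON.stringify({ id, text });
    pending.set(id, setInterval(() => channel.send(payload), retryMs));
    channel.send(payload);
  };
}

function makeDedupingReceiver(channel, deliver) {
  const seen = new Set();
  channel.onmessage = ({ data }) => {
    const { id, text } = JSON.parse(data);
    channel.send(JSON.stringify({ ack: id })); // ack even duplicates
    if (!seen.has(id)) {
      seen.add(id);
      deliver(text); // surface each message exactly once
    }
  };
}
```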
Unreliable one way, reliable the other
Using the negotiated constructor argument, it's possible to create a datachannel that's unreliable in one direction, yet reliable in the other. This can be used to solve the unreliable-reply problem and avoid duplicate messages on the receiving (pc2) end.
const dc1 = pc1.createDataChannel("chat", {negotiated: true, id: 0, maxRetransmits: 0});
const dc2 = pc2.createDataChannel("chat", {negotiated: true, id: 0});
