How to change streams in PeerJS - javascript

I use PeerJS.
I need to switch the stream from the camera to the screen share during the broadcast. Can I change the streams?

With PeerJS we can change/switch both video and audio tracks as below.
peer.call('other-id', stream) on success returns a call object that exposes the underlying RTCPeerConnection:
let call = peer.call('other-id', stream)
call.peerConnection // RTCPeerConnection object
This peerConnection object has a getSenders() method which returns an array of RTCRtpSender objects, one per outgoing track (here the first one is audio, the second is video).
Then you can call replaceTrack(newTrack) on the relevant sender; below I am replacing the audio track:
call.peerConnection.getSenders()[0].replaceTrack(newTrack)
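Note that indexing getSenders() by position is fragile, since the order is not guaranteed; a minimal sketch of a safer variant that picks the sender by track kind:
let audioSender = call.peerConnection
    .getSenders()
    .find((sender) => sender.track && sender.track.kind === 'audio');
if (audioSender) {
    audioSender.replaceTrack(newTrack); // returns a Promise that resolves once the track is swapped
}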

Changing the stream is possible by using replaceTrack.
Maybe this answer helps.

Here is how I switch from the camera to screen sharing in PeerJS with ReactJS.
function replaceStream() {
    // request a screen-capture stream (audio capture support varies by browser)
    navigator.mediaDevices.getDisplayMedia({ video: true, audio: true }).then((stream) => {
        callState.peerConnection.getSenders().forEach((sender) => {
            if (sender.track.kind === "audio" && stream.getAudioTracks().length > 0) {
                sender.replaceTrack(stream.getAudioTracks()[0]);
            }
            if (sender.track.kind === "video" && stream.getVideoTracks().length > 0) {
                sender.replaceTrack(stream.getVideoTracks()[0]);
            }
        });
        // show the new stream in the local preview
        videoRef.current.srcObject = stream;
        videoRef.current.play();
    });
}
Here callState is the state/variable that holds the call object returned when we make or answer a call to a peer.
You can change getDisplayMedia to getUserMedia according to your need.
videoRef holds the reference to the video element displayed on the frontend.
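To switch back from the screen share to the camera, the same pattern works with getUserMedia; a sketch under the same assumptions (callState and videoRef as above; the constraint object is my guess):
function restoreCamera() {
    // request the camera/microphone again and swap the outgoing tracks back
    navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
        callState.peerConnection.getSenders().forEach((sender) => {
            if (sender.track.kind === "audio" && stream.getAudioTracks().length > 0) {
                sender.replaceTrack(stream.getAudioTracks()[0]);
            }
            if (sender.track.kind === "video" && stream.getVideoTracks().length > 0) {
                sender.replaceTrack(stream.getVideoTracks()[0]);
            }
        });
        videoRef.current.srcObject = stream;
        videoRef.current.play();
    });
}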

Related

Creating and removing `<audio>` tags via javascript (possible scope issue)

I am a neophyte JS developer with a past in server-side programming.
I am creating a simple web app that allows various users to engage in live audio chat with one another. Whenever a new user logs into an audio chat room, the following ensures they can hear everyone talking:
// plays remote streams
async function playStreams(streamList) {
    await Promise.all(streamList.map(async (item, index) => {
        // add an audio streaming unit, and play it
        var audio = document.createElement('audio');
        audio.addEventListener("loadeddata", function() {
            audio.play();
        });
        audio.srcObject = item.remoteStream;
        audio.id = 'audio-stream-' + item.streamID;
        audio.muted = false;
    }));
}
Essentially I pass a list of streams into that function and play all of them.
Now if a user leaves the environment, I feel the prudent thing to do is to destroy their <audio> element.
To achieve that, I tried
function stopStreams(streamList) {
    streamList.forEach(function (item, index) {
        let stream_id = item.streamID;
        let audio_elem = document.getElementById('audio-stream-' + stream_id);
        if (audio_elem) {
            audio_elem.stop();
        }
    });
}
Unfortunately, audio_elem is always null in the function above. It is not that the streamIDs are mismatched - I have checked them.
Maybe this issue has to do with scoping? I am guessing the <audio> elements created within playStreams are scoped within that function, and thus stopStreams is unable to access them.
I need a domain expert to clarify whether this is actually the case. Moreover, I also need a solution regarding how to better handle this situation - one that cleans up successfully after itself.
p.s. a similar SO question came close to asking the same thing. But their case did not involve numerous <audio> elements being dynamically created and destroyed as users come and go. I do not know how to use that answer to solve my issue. My concepts are unclear.
It turned out the <audio> elements were never appended to the document, so document.getElementById could not find them - it was not a scoping issue. Instead of querying the DOM, I created a global dictionary like so -
const liveStreams = {};
Next, when I play live streams, I save all the <audio> elements in the aforementioned global dictionary -
// plays remote streams
async function playStreams(streamList) {
    await Promise.all(streamList.map(async (item, index) => {
        // add an audio streaming unit, and play it
        var audio = document.createElement('audio');
        audio.addEventListener("loadeddata", function() {
            audio.play();
        });
        audio.srcObject = item.remoteStream;
        audio.muted = false;
        // log the audio object in a global dictionary, keyed by stream ID
        liveStreams[item.streamID] = audio;
    }));
}
I destroy the streams by accessing them from the liveStreams dictionary, like so -
function stopStreams(streamList) {
    streamList.forEach(function (item, index) {
        let stream_id = item.streamID;
        // Check if liveStreams contains the audio element associated with stream_id
        if (liveStreams.hasOwnProperty(stream_id)) {
            let audio_elem = liveStreams[stream_id];
            // Stop the playback
            audio_elem.pause();
            // Remove the audio object's ref from the dictionary,
            // so it becomes subject to garbage collection
            delete liveStreams[stream_id];
        }
    });
}
And that does it.
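If you also want to release the underlying media resources, and not just stop playback, a small sketch (reusing item.remoteStream from playStreams; whether to stop remote tracks depends on your app):
function stopStreamTracks(item) {
    // end every track of the remote stream so the browser can release it
    item.remoteStream.getTracks().forEach(function (track) {
        track.stop();
    });
}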

Is there a way to determine if a clip is audio or video as a project item?

I'm coding a Premiere plugin using the SDK provided by Adobe. I want my function to be sensitive to whether the media is audio-only or video (with or without audio), e.g. whether it's a .wav or an .mp4. I want this to happen before any clips are on any timelines, so I can't use the track.mediaType attribute.
I am trying to do this when the media is a project item but am not finding anything in the documentation (https://premiere-scripting-guide.readthedocs.io/4%20-%20Project%20Item%20object/projectItem.html?highlight=mediaType)
For now, this is what I'm doing:
GetProjectItemType: function (projectItem) {
    if (projectItem.name.includes("wav") || projectItem.name.includes("mp3") || projectItem.name.includes("AIFF")) {
        return "Audio";
    } else {
        return "Video";
    }
}
There is a property that you can use, referenced in the Adobe-CEP/Samples:
projectItem.type
https://github.com/Adobe-CEP/Samples/blob/f86975c3689e29df03e7d815c3bb874045b7f991/PProPanel/jsx/PPRO/Premiere.jsx#L1614
ex.
if ((projectItem.type === ProjectItemType.CLIP) || (projectItem.type === ProjectItemType.FILE)) {
}
This can help you differentiate between other projectItem types such as Bins, Clips, and Files, and you can use it in combination with your current implementation to ensure you have either an audio or a video projectItem and not a Bin.
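A minimal sketch combining both checks (getProjectItemMediaType is my name; ProjectItemType is assumed to be available in the JSX context, and indexOf is used in case the panel runs in plain ExtendScript, which lacks String.prototype.includes):
function getProjectItemMediaType(projectItem) {
    // only Clips and Files reference actual media; Bins etc. are skipped
    if ((projectItem.type === ProjectItemType.CLIP) || (projectItem.type === ProjectItemType.FILE)) {
        var name = projectItem.name.toLowerCase();
        if (name.indexOf(".wav") !== -1 || name.indexOf(".mp3") !== -1 || name.indexOf(".aiff") !== -1) {
            return "Audio";
        }
        return "Video";
    }
    return "NotMedia"; // hypothetical label for bins, sequences, etc.
}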

How to do screen sharing with simple-peer webRTC SDK

I'm trying to implement WebRTC with simple-peer in my chat. Everything works, but I would like to add a screen sharing option. For that I tried this:
$("#callScreenShare").click(async function(){
if(captureStream != null){
p.removeStream(captureStream)
p.addStream(videoStream)
captureStreamTrack.stop()
captureStreamTrack =captureStream= null
$("#callVideo")[0].srcObject = videoStream
$(this).text("screen_share")
}else{
captureStream = await navigator.mediaDevices.getDisplayMedia({video:true, audio:true})
captureStreamTrack = captureStream.getTracks()[0]
$("#callVideo")[0].srcObject = captureStream
p.removeStream(videoStream)
console.log(p)
p.addStream(captureStream)
$(this).text("stop_screen_share")
}
})
But it stops the camera and after that nothing happens, and my video stream on my peer's computer is frozen. No errors, nothing, only that.
I've put a console.log where the stream event is fired. The first time it fires, but when I call the addStream method, it doesn't fire again.
If someone could help me it would be really helpful.
What I do is replace the track. So instead of removing and adding the stream:
p.streams[0].getVideoTracks()[0].stop()
p.replaceTrack(p.streams[0].getVideoTracks()[0], captureStreamTrack, p.streams[0])
This will replace the stream's video track with the one from the display capture.
simple-peer docs
The function below will do the trick. Simply call the replaceTrack function, passing it the new track and the remote peer instance.
function replaceTrack(newTrack, recipientPeer) {
    recipientPeer.replaceTrack(
        recipientPeer.streams[0].getVideoTracks()[0], // the track currently being sent
        newTrack,
        recipientPeer.streams[0]
    )
}
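Putting it together, a hedged sketch of toggling between camera and screen with simple-peer (p and videoStream are the names from the question; the constraint object is an assumption):
async function startScreenShare() {
    const captureStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    const screenTrack = captureStream.getVideoTracks()[0];
    // swap the outgoing camera track for the screen track
    p.replaceTrack(videoStream.getVideoTracks()[0], screenTrack, videoStream);
    // when the user ends sharing via the browser UI, swap the camera back in
    screenTrack.onended = () => {
        p.replaceTrack(screenTrack, videoStream.getVideoTracks()[0], videoStream);
    };
}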

Mute audio created in iframe with Audio object

I am working with an iframe that contains code we receive from a third party. This third-party code contains a canvas and a game created using Phaser.
I am looking for a way to mute the sound that this game makes at some point.
We usually do it this way:
function mute(node) {
    // search for media elements (video, audio) within the iframe and mute each one
    const videoEls = node.getElementsByTagName('video');
    for (let i = 0; i < videoEls.length; i += 1) {
        videoEls[i].muted = true;
    }
    const audioEls = node.getElementsByTagName('audio');
    for (let j = 0; j < audioEls.length; j += 1) {
        audioEls[j].muted = true;
    }
}
After some research I found out that you can play sound in a web page by creating new Audio([url]) and then calling the play method on the created object.
The issue with the mute function that we use is that it does not pick up sounds created with new Audio([url]).
Is there a way, from the container, to list all the Audio objects that have been created within a document? Or is it just impossible, meaning this is a way to play audio that the iframe's container cannot mute?
No, there is no way.
Not only can they use non-appended <audio> elements like you guessed, but they can also use the Web Audio API (which I think Phaser does), and in neither case do you have a way of accessing it from the outside if they didn't expose such an option.
Your best move would be to ask the developer of this game to expose an API through which you would be able to control this.
For instance, it could be some query parameter in the URL (https://thegame.url?muted=true) or even an API based on postMessage, where you'd be able to do iframe.contentWindow.postMessage({muted: true}) from your own page.
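For illustration, a sketch of the postMessage idea; the {muted} message shape and the element id are hypothetical, and the game developer would have to implement the listener side:
// container page
const gameFrame = document.getElementById('game-iframe'); // hypothetical id
gameFrame.contentWindow.postMessage({ muted: true }, '*');

// inside the game (to be added by the third party), something like:
window.addEventListener('message', (event) => {
    if (typeof event.data.muted === 'boolean') {
        game.sound.mute = event.data.muted; // Phaser's global sound manager
    }
});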

How to use RTCPeerConnection.removeTrack() to remove video or audio or both?

I'm studying WebRTC and trying to figure out how it works.
I modified this sample on WebRTC.github.io to make getUserMedia the source of leftVideo and stream it to rightVideo. It works.
And I want to add a feature: removing the outgoing tracks when I press pause on leftVideo (my browser is Chrome 69).
I changed a part of call():
...
stream.getTracks().forEach(track => {
    pc1Senders.push(pc1.addTrack(track, stream));
});
...
And added a handler on leftVideo:
leftVideo.onpause = () => {
    pc1Senders.map(sender => pc1.removeTrack(sender));
}
I don't want to close the connection; I just want to turn off only the video or audio.
But after I pause leftVideo, rightVideo still gets the track.
Am I doing something wrong here, or maybe somewhere else?
Thanks for your help.
First, you need to get the stream of the peer. You can mute/hide the stream using the enabled attribute of the MediaStreamTrack. Use the snippet below to toggle media.
/* stream: MediaStream, type: track kind ('audio'/'video') */
function toggleTrack(stream, type) {
    stream.getTracks().forEach((track) => {
        if (track.kind === type) {
            track.enabled = !track.enabled;
        }
    });
}
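For example, assuming localStream is the stream you are sending (a hypothetical name):
toggleTrack(localStream, 'video'); // remote side now receives black frames
toggleTrack(localStream, 'audio'); // remote side now receives silence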
const senders = pc.getSenders();
senders.forEach((sender) => pc.removeTrack(sender)); // stop sending each track
newTracks.forEach((tr) => pc.addTrack(tr)); // newTracks: whatever tracks you want to send next
Get all the senders.
Loop through and remove each sending track.
Add new tracks (if so desired).
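Note that removeTrack()/addTrack() fire negotiationneeded, so the change only reaches the other side after a new offer/answer exchange; a minimal sketch (the signaling transport is assumed and not shown):
pc.onnegotiationneeded = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    // send pc.localDescription to the remote peer over your signaling channel
};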
Edit: or, if you won't need renegotiation (conditions listed below), use replaceTrack (https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack).
Not all track replacements require renegotiation. In fact, even changes that seem huge can be done without requiring negotiation. Here are the changes that can trigger negotiation:
- The new track has a resolution which is outside the bounds of the current track; that is, the new track is either wider or taller than the current one.
- The new track's frame rate is high enough to cause the codec's block rate to be exceeded.
- The new track is a video track and its raw or pre-encoded state differs from that of the original track.
- The new track is an audio track with a different number of channels from the original.
- Media sources that have built-in encoders, such as hardware encoders, may not be able to provide the negotiated codec. Software sources may not implement the negotiated codec.
async switchMicrophone(on) {
    if (on) {
        console.log("Turning on microphone");
        const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
        this.localAudioTrack = stream.getAudioTracks()[0];
        const audioSender = this.peerConnection.getSenders().find(e => e.track?.kind === 'audio');
        if (audioSender == null) {
            console.log("Initiating audio sender");
            this.peerConnection.addTrack(this.localAudioTrack); // will create a sender; the streamless track must be handled on the other side
        } else {
            console.log("Updating audio sender");
            await audioSender.replaceTrack(this.localAudioTrack); // replaceTrack will do it gently, no new negotiation will be triggered
        }
    } else {
        console.log("Turning off microphone");
        this.localAudioTrack.stop(); // this will turn off the mic and make sure you don't have an active on-air indicator
    }
}
This is simplified code. It solves most of the issues described in this topic.
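Usage would look something like this (a sketch; client stands for whatever object holds peerConnection and the method above):
await client.switchMicrophone(true);  // start (or resume) sending mic audio
await client.switchMicrophone(false); // stop the mic and release the device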
