Everything works fine (createOffer, createAnswer, iceCandidates, ...), but the incoming remoteStream has two tracks: the audio track, which works, and the video track, which does not and reports readyState: "muted".
If I call createOffer on page load and then call createOffer again on "start call" with the same peerConnection, the video displays correctly, but then Firefox throws "Cannot create offer in state have-local-offer".
Any ideas what the problem could be? (The code is too complex to show here.)
Can you see the local video on both sides?
-> On a single PC, only one browser can access the camera at a time (either Chrome or Firefox).
-> Try calling between two different machines, or Chrome-to-Chrome or Firefox-to-Firefox.
"Cannot create offer in state have-local-offer"
It means you already created an offer and are trying to create another one without setting the remote answer.
Calling createOffer again is not a good idea. Make sure you create the offer in the following order (synchronously):
After receiving the stream in the getUserMedia callback, add it to the peerConnection.
Only after adding the stream, create the offer. (In the case of an answer, set the remote offer before creating the answer.)
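A minimal sketch of that order (pc and sendOfferToPeer are hypothetical names for your RTCPeerConnection and signaling helper; they are not from the question):

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(function (stream) {
        // 1. Add the local stream first ...
        pc.addStream(stream);
        // 2. ... then create the offer ...
        return pc.createOffer();
    })
    .then(function (offer) {
        // 3. ... set it locally ...
        return pc.setLocalDescription(offer);
    })
    .then(function () {
        // 4. ... and only then signal it to the remote side.
        // (On the answering side, setRemoteDescription(offer) must
        // happen before createAnswer().)
        sendOfferToPeer(pc.localDescription);
    })
    .catch(function (error) {
        console.log(error);
    });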
I was having this issue while preparing the MediaStream in an iOS app. It turned out that I wasn't passing the correct RTCMediaConstraints.
The problem was resolved after I switched to [RTCMediaConstraints defaultConstraints].
For example:
- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    RTCMediaConstraints *mediaConstraints = [RTCMediaConstraints defaultConstraints];
    RTCAVFoundationVideoSource *source =
        [[self peerConnectionFactory] avFoundationVideoSourceWithConstraints:mediaConstraints];
    localVideoTrack =
        [[self peerConnectionFactory] videoTrackWithSource:source
                                                   trackId:kARDVideoTrackId];
    return localVideoTrack;
}
Here is my code:
const videoElem = document.getElementById("videoScan");
var stream = videoElem.srcObject;
var tracks = stream.getTracks();
tracks.forEach(function (track) {
    track.stop();
});
videoElem.srcObject = null;
I am trying to stop all camera streams in order to make ZXing work. However, I get an error (only on an Android tablet).
The problem is that I can't solve it, even though everything otherwise works well. And when I wrap the code in a try/catch, the code stops working. I can't figure out why this error appears.
When you say "everything is working well", do you mean that you get the output you want?
If the code throwing the error is the same code you are showing us, then it means that stream contains null.
Since stream is assigned once, on the previous instruction, it means that document.getElementById("videoScan").srcObject is null too.
I'd suggest some further debugging by, for example, printing to the JS console (if you have access to it from your Android debug environment; I'm sure there's a way) what's inside stream before trying to access its getTracks method.
Please also check the compatibility of srcObject with your Android (browser?) environment: MDN has a compatibility table you can inspect.
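For instance, a quick check (reusing the videoScan id from the question) to run just before the getTracks call:

console.log("srcObject is:", document.getElementById("videoScan").srcObject);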
The simplest way to catch this null reference error is to check whether the stream actually exists.
const videoElem = document.getElementById("videoScan");
const stream = videoElem.srcObject;
if (stream) {
    // getTracks() always returns an array, so checking the stream
    // itself is enough to avoid the null reference error.
    stream.getTracks().forEach(function (track) {
        track.stop();
    });
} else {
    console.warn('Stream object is not available');
}
videoElem.srcObject = null;
So I found the solution. The error stopped the code from continuing, which, in my case, actually made it work. I made a very basic mistake: not paying enough attention to the debug results. However, if anyone gets the error "Uncaught (in promise) DOMException: Could not start video source" with ZXing on Android, you simply have to stop all video streams to allow Chrome to start another camera. It will not work otherwise.
My team is adapting the sipml5 library to create an HTML5 softphone for use in our organization. The full repository is here: https://github.com/L1kMakes/sipml5-ng . We have the code working well; audio and video calls work flawlessly. In the original code we forked from (which was from around 2012), screen sharing was accomplished with a browser plugin, but HTML5 and WebRTC have since changed to allow this to be done with just JavaScript.
I am having difficulty adapting the code to accommodate this. I am starting with the code on line 828 here: https://github.com/L1kMakes/sipml5-ng/blob/master/src/tinyMEDIA/src/tmedia_session_jsep.js . This works, though without audio, which makes sense: the only possible audio stream from a screen share is the screen audio, not the mic audio. I am attempting to initialize an audio stream from getUserMedia, grab a video stream from getDisplayMedia, and present them to the client as a single MediaStream. Here's my adapted code:
if ( this.e_type == tmedia_type_e.SCREEN_SHARE ) {
    // Plugin-less screen share using WebRTC requires "getDisplayMedia" instead of "getUserMedia".
    // Because of this, audio constraints become limited, and we have to deal with the promise
    // for the MediaStream. This is a change since Chrome 71. We are able to use the .then of
    // the promise to request a second MediaStream, then attach the audio from that to the video
    // of our screen-share MediaStream, enabling plugin-less screen sharing with audio.
    let o_stream = null;
    let o_streamAudio = null;
    let o_streamVideo = null;
    let o_streamAudioTrack = null;
    let o_streamVideoTrack = null;
    try {
        navigator.mediaDevices.getDisplayMedia(
            {
                audio: false,
                video: !!( this.e_type.i_id & tmedia_type_e.VIDEO.i_id ) ? o_video_constraints : false
            }
        ).then(o_streamVideo => {
            o_streamVideoTrack = o_streamVideo.getVideoTracks()[0];
            navigator.mediaDevices.getUserMedia(
                {
                    audio: o_audio_constraints,
                    video: false
                }
            ).then(o_streamAudio => {
                o_streamAudioTrack = o_streamAudio.getAudioTracks()[0];
                o_stream = new MediaStream( [ o_streamVideoTrack, o_streamAudioTrack ] );
                tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
            });
        });
    } catch ( s_error ) {
        tmedia_session_jsep01.onGetUserMediaError(s_error, This);
    }
} else {
    try {
        navigator.mediaDevices.getUserMedia(
            {
                audio: (this.e_type == tmedia_type_e.SCREEN_SHARE) ? false : !!(this.e_type.i_id & tmedia_type_e.AUDIO.i_id) ? o_audio_constraints : false,
                video: !!(this.e_type.i_id & tmedia_type_e.VIDEO.i_id) ? o_video_constraints : false // "SCREEN_SHARE" contains the "VIDEO" flag -> (VIDEO & SCREEN_SHARE) = VIDEO
            }
        ).then(o_stream => {
            tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
        });
    } catch ( s_error ) {
        tmedia_session_jsep01.onGetUserMediaError(s_error, This);
    }
}
My understanding is that o_stream should represent the resolved MediaStream tracks, not a promise, when doing a screen share. On the other end, we are using the client "MicroSIP." When making a video call, I get my video preview locally in our web app as soon as the call is placed; when the call is answered, the MicroSIP client shows a green square for a second, then resolves to my video. When I make a screen-share call, my local web app shows the local preview of the screen share, but upon answering the call, the MicroSIP client just gets a green square and never shows the actual screen share.
The video constraints for both are the same. If I add debugging output describing what is actually in the media streams, they appear identical as far as I can tell. I made a test video call and a test screen-share call, captured debug logs from each, and held them side by side in Notepad++; all appears identical save for the explicit debug output describing the traversal down the permission-request tree with "getUserMedia" versus "getDisplayMedia." I can't really post the debug logs here, as cleaning out my organization's information would leave them pretty barren. Save for the extra debug output on the "getDisplayMedia" call before "getUserMedia", timestamps, and unique IDs related to individual calls, the log files are identical.
I am wondering if the media streams are not resolving from their promises before the "then" completes, but asynchronous JavaScript and promises are still a bit over my head. I do not believe I should convert this function to async, but I have nothing else to debug here; the MediaStream is working, as I can see it locally, but I'm stumped on figuring out what is going on with the remote send.
The solution was... nothing; the code was fine. It turns out the recipient SIP client we were using had an issue where it simply aborts if it receives video larger than 640x480.
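For anyone hitting the same symptom: one way to accommodate such a client is to cap the capture resolution in the getDisplayMedia constraints. A sketch (the width/height constraints are standard; that 640x480 is the right cap for your client is an assumption):

navigator.mediaDevices.getDisplayMedia({
    audio: false,
    video: {
        // Hypothetical cap so legacy SIP clients that abort on
        // larger frames can still render the screen share.
        width: { max: 640 },
        height: { max: 480 }
    }
}).then(o_streamVideo => {
    // ... proceed as in the code above ...
});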
I'm trying to understand an aspect of the JavaScript Web Audio API for playing samples.
More specifically, I'm using the Web Audio API to play a very short sample (a "click" sample, for a metronome).
This is what I do:
let sample = this.setupSample(this.audioContext, click1);
sample.start(time);
sample.stop(time + this.noteLength);
This piece of code works if I run the app from desktop (my PC), but when I run the app from mobile, as soon as I click the "start" button to play the metronome, I get the following error:
InvalidStateError: The object is in an invalid state.
93| sample.stop(time + this.noteLength);
The first thing I tried was commenting out that line:
let sample = this.setupSample(this.audioContext, click1);
sample.start(time);
// sample.stop(time + this.noteLength);
In this way, the app runs correctly both from desktop and mobile.
So, it seems to me that calling stop() on a sample that has already ended is not necessary, maybe even wrong.
Hence my question: is it wrong to call stop() on a sample that has already ended? Should I consider it implicitly called when the sample stops (ends) by itself? And why don't I get that InvalidStateError on desktop?
Thank you.
EDIT: to be clearer, here's what setupSample does:
setupSample = (audioContext, samplePath) => {
    const source = audioContext.createBufferSource();
    const myRequest = new Request(samplePath);
    // Note: the fetch/decode chain below is asynchronous, so the source
    // is returned immediately, before its buffer has been set.
    fetch(myRequest).then(function (response) {
        return response.arrayBuffer();
    }).then(function (buffer) {
        audioContext.decodeAudioData(buffer, function (decodedData) {
            source.buffer = decodedData;
            source.connect(audioContext.destination);
        });
    });
    return source;
};
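Related to the EDIT above: setupSample returns the source while the fetch/decode chain is still in flight. A promise-based variant (a sketch, not the original code; setupSampleAsync is a hypothetical name) makes that explicit, so the caller can await the source before scheduling start()/stop():

setupSampleAsync = async (audioContext, samplePath) => {
    // Resolve only after the sample is fully decoded and connected,
    // so start()/stop() are never scheduled on a buffer-less source.
    const response = await fetch(new Request(samplePath));
    const arrayBuffer = await response.arrayBuffer();
    const decodedData = await audioContext.decodeAudioData(arrayBuffer);
    const source = audioContext.createBufferSource();
    source.buffer = decodedData;
    source.connect(audioContext.destination);
    return source;
};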
I know how to capture a webpage, but I am asking how to capture the desktop or another application on the desktop. And is there any way to highlight parts of the screen? Like what html2canvas does for webpages, can we do something for desktop applications using a browser app in HTML/JS?
Yes, it is possible! But as far as I know, only in Firefox and Chrome (I used Chrome), thanks to screen capturing and WebRTC. More info about WebRTC.
I used a library called RTCMultiConnection, which is very easy to use, but you should be able to do this without a library as well.
Here, just to give you a starting point:
// 1. Create the connection object
var connection = new RTCMultiConnection();

// 2. Activate the screen, which is the whole monitor, not only the browser window!
connection.session = {
    screen: true,
    data: false,
    oneway: true
};

// 3. Create the callback for the stream
connection.onstream = function(event) {
    // Do something with the event.
    // event.stream contains the stream, event.mediaElement the media element.
    // I used event.mediaElement as the parameter to draw the frame into a canvas,
    // via context2d.drawImage(event.mediaElement, ...),
    // then created a base64 string via canvas.toDataURL("image/png").
    // Don't forget to stop the stream if you just want a single image.
};

// 4. Start desktop sharing
connection.open({
    // you could register an onMediaCaptured callback here
});
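To make step 3 concrete, here is a sketch of what the callback body might look like when you only want a single screenshot (that event.mediaElement is a playable media element comes from RTCMultiConnection's behavior described above; the rest is illustrative, not the library's required usage):

connection.onstream = function(event) {
    // Draw the current frame of the shared screen into a canvas ...
    var canvas = document.createElement("canvas");
    canvas.width = event.mediaElement.videoWidth;
    canvas.height = event.mediaElement.videoHeight;
    var context2d = canvas.getContext("2d");
    context2d.drawImage(event.mediaElement, 0, 0, canvas.width, canvas.height);
    // ... turn it into a base64 PNG string ...
    var base64Image = canvas.toDataURL("image/png");
    // ... and stop the stream, since we only wanted one image.
    event.stream.getTracks().forEach(function(track) {
        track.stop();
    });
};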
I want to change from an audio/video stream to a "screen sharing" stream:
peerConnection.removeStream(streamA) // __o_j_sep... in Screenshots below
peerConnection.addStream(streamB) // SSTREAM in Screenshots below
streamA is a video/audio stream coming from my camera and microphone.
streamB is the screen capture I get from my extension.
They are both MediaStream objects (screenshots omitted; see Remark 1 below).
But if I remove streamA from the peerConnection and addStream(streamB) as above, nothing seems to happen.
The following works as expected (the stream on both ends is removed and re-added):
peerConnection.removeStream(streamA) // __o_j_sep...
peerConnection.addStream(streamA) // __o_j_sep...
More Details
I have found this example, which does "the reverse" (switching from screen capture to audio/video with camera), but I can't spot a significant difference.
The peerConnection RTCPeerConnection object is actually created by this SIPML library (source code available here), and I access it like this:
var peerConnection = stack.o_stack.o_layer_dialog.ao_dialogs[1].o_msession_mgr.ao_sessions[0].o_pc
(Yes, this does not look right, but there is no official way to get access to the peer connection; see the discussions here and here.)
Originally I tried to just (ex)change the videoTracks of streamA with the videoTrack of streamB (see my question here). It was suggested that I should try to renegotiate the peer connection (by removing/adding streams to it), because addTrack does not trigger a renegotiation.
I've also asked for help here, but the maintainer seems very busy and hasn't had a chance to respond yet.
* 1 Remark: Why does streamB not have a videoTracks property? The stream plays in an HTML <video> element and seems to "work". Here is how I get it:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: streamId,
            maxWidth: window.screen.width,
            maxHeight: window.screen.height
            //, maxFrameRate: 3
        }
    }
},
// success callback
function(localMediaStream) {
    SSTREAM = localMediaStream; // streamB
},
// fail callback
function(error) {
    console.log(error);
});
It also seems to have a video track (screenshot omitted).
I'm running:
OS X 10.9.3
Chrome Version 35.0.1916.153
To answer your first question: when modifying the MediaStream in an active peerConnection, the peerConnection object will fire an onnegotiationneeded event. You need to handle that event and re-exchange your SDPs. The main reason for this is so that both parties know what streams are being sent between them. When the SDPs are exchanged, the MediaStream ID is included, and if there is a new stream with a new ID (even with all other things being equal), a renegotiation must take place.
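A minimal sketch of such a handler, in the callback style of that era (sendToPeer is a hypothetical signaling helper, not part of any WebRTC API):

function logError(error) {
    console.log(error);
}

peerConnection.onnegotiationneeded = function() {
    // Re-create the offer and re-exchange SDPs whenever streams change.
    peerConnection.createOffer(function(offer) {
        peerConnection.setLocalDescription(offer, function() {
            sendToPeer(peerConnection.localDescription);
        }, logError);
    }, logError);
};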
For your second question (about SSTREAM): it does indeed contain video tracks, but there is no videoTrack attribute on webkitMediaStreams. You can grab tracks via their ID, however.
Since there can be numerous tracks for each media type, there is no single videoTrack or audioTrack attribute but instead an array of them. The .getVideoTracks() call returns an array of the current video tracks, so you could grab a particular video track by indicating its index: .getVideoTracks()[0].
I do something similar: on clicking a button, I remove the active stream and add the other.
This is the way I do it, and it works perfectly for me:
// Stop and remove the currently active stream
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

// Switching to an audio-only stream:
var constraints_audio = {
    audio: true
};
gotStream = function (localstream_aud) {
    _this.rtc.localstream_aud = localstream_aud;
    _this.rtc.mediaConstraints = constraints_audio;
    _this.rtc.createOffer();
};
getUserMedia(constraints_audio, gotStream);

// Switching to a screen-sharing stream (note: the constraints are
// hoisted above the getUserMedia calls so both references resolve):
var constraints_screen = {
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'screen'
        }
    }
};
gotStream = function (localstream) {
    _this.rtc.localstream = localstream;
    _this.rtc.mediaConstraints = constraints_screen;
    _this.rtc.createStream();
    _this.rtc.createOffer();
};
getUserMedia(constraints_screen, gotStream);
Chrome doesn't allow audio along with 'screen', so I create a separate stream for it.
You will need to do the opposite in order to switch back to your older video stream, or indeed to any other stream you want.
Hope this helps.