I'm rather new to the whole WebRTC thing, and I've been reading a ton of articles about the different APIs for handling video recording. It seems the more I read, the more confusing the whole thing gets. I know I can use solutions such as Nimbb, but I'd rather keep everything "in house", so to speak.
The way I've got the code right now, the page loads and the user clicks a button to determine the type of input (text or video). When the video button is clicked, the webcam is initialized and turned on to record. However, the stream from the webcam doesn't show up in the page itself. It seems this is because the src of the video is actually an object. The weird thing is that when I try to get more info about the object by logging it to the console, I only get an object attribute called currentTime. How does this object create the actual source for the video element? I've tried many different variations of the code below to no avail, so I'm just wondering what I'm doing wrong.
var playerId = 'cam-' + t + '-' + click[1] + '-' + click[2];
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;

if (navigator.getUserMedia) {
    function onSuccess(stream) {
        var video = document.getElementById(playerId);
        var vidSource;
        if (window.webkitURL || window.URL) {
            vidSource = (window.webkitURL) ? window.webkitURL.createObjectURL(stream) : window.URL.createObjectURL(stream);
        } else {
            vidSource = stream;
        }
        video.autoplay = true;
        video.src = vidSource;
    }
    function onError(e) {
        console.error('Error: ', e);
    }
    navigator.getUserMedia({ video: true, audio: true }, onSuccess, onError);
} else {
    // flash alternative
}
The webkit check was actually the problem, as pointed out by mido22 in the comments.
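For anyone landing here later: in current browsers you can skip the object-URL branch entirely and assign the stream straight to the video element via srcObject. A minimal sketch (reusing the playerId from the snippet above; everything else is just the standard promise-based mediaDevices API):

// Sketch: modern equivalent of the snippet above using navigator.mediaDevices
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(function (stream) {
        var video = document.getElementById(playerId);
        video.srcObject = stream; // no createObjectURL needed for MediaStream objects
        video.autoplay = true;
    })
    .catch(function (e) {
        console.error('Error: ', e);
    });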
My team is adapting the sipml5 library to create an HTML5 softphone for use in our organization. The full repository is here: https://github.com/L1kMakes/sipml5-ng . We have the code working well; audio and video calls work flawlessly. In the original code we forked from (which was from around 2012), screen sharing was accomplished with a browser plugin, but HTML5 and WebRTC now allow this to be done with just JavaScript.
I am having difficulty adapting the code to accommodate this. I am starting with the code here on line 828: https://github.com/L1kMakes/sipml5-ng/blob/master/src/tinyMEDIA/src/tmedia_session_jsep.js . This works, though without audio. That makes sense, as the only possible audio stream from a screen share is the screen audio, not the mic audio. I am attempting to initialize an audio stream from getUserMedia, grab a video stream from getDisplayMedia, and present that to the client as a single MediaStream. Here's my adapted code:
if ( this.e_type == tmedia_type_e.SCREEN_SHARE ) {
    // Plugin-less screen share using WebRTC requires "getDisplayMedia" instead of "getUserMedia".
    // Because of this, audio constraints become limited, and we have to use async to deal with
    // the promise variable for the mediastream. This is a change since Chrome 71. We are able
    // to use the .then aspect of the promise to call a second mediaStream, then attach the audio
    // from that to the video of our screenshare mediaStream, enabling plugin-less screen
    // sharing with audio.
    let o_stream = null;
    let o_streamAudio = null;
    let o_streamVideo = null;
    let o_streamAudioTrack = null;
    let o_streamVideoTrack = null;
    try {
        navigator.mediaDevices.getDisplayMedia(
            {
                audio: false,
                video: !!( this.e_type.i_id & tmedia_type_e.VIDEO.i_id ) ? o_video_constraints : false
            }
        ).then(o_streamVideo => {
            o_streamVideoTrack = o_streamVideo.getVideoTracks()[0];
            navigator.mediaDevices.getUserMedia(
                {
                    audio: o_audio_constraints,
                    video: false
                }
            ).then(o_streamAudio => {
                o_streamAudioTrack = o_streamAudio.getAudioTracks()[0];
                o_stream = new MediaStream( [ o_streamVideoTrack, o_streamAudioTrack ] );
                tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
            });
        });
    } catch ( s_error ) {
        tmedia_session_jsep01.onGetUserMediaError(s_error, This);
    }
} else {
    try {
        navigator.mediaDevices.getUserMedia(
            {
                audio: (this.e_type == tmedia_type_e.SCREEN_SHARE) ? false : !!(this.e_type.i_id & tmedia_type_e.AUDIO.i_id) ? o_audio_constraints : false,
                video: !!(this.e_type.i_id & tmedia_type_e.VIDEO.i_id) ? o_video_constraints : false // "SCREEN_SHARE" contains "VIDEO" flag -> (VIDEO & SCREEN_SHARE) = VIDEO
            }
        ).then(o_stream => {
            tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
        });
    } catch ( s_error ) {
        tmedia_session_jsep01.onGetUserMediaError(s_error, This);
    }
}
My understanding is that o_stream should represent the resolved MediaStream, not a promise, by the time it is handed to the success callback during a screen share. On the other end, we are using the client "MicroSIP." When making a video call, I get my video preview locally in our web app as soon as the call is placed; when the call is answered, the MicroSIP client shows a green square for a second and then resolves to my video. When I make a screen share call, my local web app sees the local preview of the screen share, but upon answering the call, the MicroSIP client just gets a green square and never receives the actual screen share.
The video constraints for both are the same. If I add debugging output to describe what is actually in the media streams, they appear identical as far as I can tell. I made a test video call and a test screen share call, captured debug logs from each, and held them side by side in Notepad++. Everything appears identical except for the explicit debug output describing the traversal down the permission request tree with "GetUserMedia" and "GetDisplayMedia." I can't really post the debug logs here, as cleaning out information about my organization would leave them pretty barren. Aside from the extra debug output on the "getDisplayMedia" call before "getUserMedia", timestamps, and unique IDs related to individual calls, the log files are identical.
I am wondering if the media streams are not resolving from their promises before the "then" completes, but asynchronous JavaScript and promises are still a bit over my head. I do not believe I should convert this function to async, but I have nothing else to debug here; the MediaStream is working, as I can see it locally, but I'm stumped on figuring out what is going on with the remote send.
The solution was... nothing; the code was fine. It turns out the recipient SIP client we were using has an issue where it simply aborts if it receives video larger than 640x480.
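For reference, the same combine-screen-video-with-mic-audio logic can also be written with async/await. This is a minimal sketch using only the standard mediaDevices API; the function name and constraint parameters are placeholders, not the sipml5-ng wrappers:

// Sketch: grab screen video and mic audio separately, then merge the tracks.
async function getScreenShareWithMicAudio(o_video_constraints, o_audio_constraints) {
    const o_streamVideo = await navigator.mediaDevices.getDisplayMedia({
        audio: false,
        video: o_video_constraints
    });
    const o_streamAudio = await navigator.mediaDevices.getUserMedia({
        audio: o_audio_constraints,
        video: false
    });
    // One combined stream: the screen's video track plus the microphone's audio track.
    return new MediaStream([
        o_streamVideo.getVideoTracks()[0],
        o_streamAudio.getAudioTracks()[0]
    ]);
}

The stream returned here is already resolved by the time the function's promise settles, exactly as o_stream is inside the nested .then callbacks above.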
I'll describe my problem briefly. I made a page that accesses the webcam to take a picture and then upload it to my server. When I access the page on localhost it works perfectly; the problem occurs when I try to access it from another device or via the IP address. For example: http://localhost/Project/Page works well, but http://192.168.0.5/Project/Page doesn't.
This is the code I used to access the media. The error occurs in the else branch and throws the alert:
navigator.getUserMedia ||
    (navigator.getUserMedia = navigator.mozGetUserMedia ||
        navigator.webkitGetUserMedia || navigator.msGetUserMedia);

if (navigator.getUserMedia) {
    navigator.getUserMedia({ video: true, audio: false }, onSuccess, onError);
} else {
    alert("your browser doesn't support this function");
}
I don't know if the code isn't working or if there is a security policy making my page crash.
Regards
Found a solution. I had to add a "security exception" to the browser, and it worked. Maybe not the best practice, but the cheapest one when you don't have an SSL certificate.
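For context, this is the secure-context restriction: modern browsers only expose camera/microphone access on HTTPS pages or on localhost, which is why the IP-based URL fails while localhost works. A small check you could add before requesting the camera (a sketch, reusing the onSuccess/onError callbacks referenced in the snippet above):

// Sketch: fail early when the page is not a secure context,
// since getUserMedia is blocked on plain HTTP (localhost excepted).
if (!window.isSecureContext) {
    alert('This page must be served over HTTPS (or from localhost) to use the camera.');
} else if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ video: true, audio: false })
        .then(onSuccess)
        .catch(onError);
} else {
    alert("your browser doesn't support this function");
}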
I want to change from an audio/video stream to a "screen sharing" stream:
peerConnection.removeStream(streamA) // __o_j_sep... in Screenshots below
peerConnection.addStream(streamB) // SSTREAM in Screenshots below
streamA is a video/audio stream coming from my camera and microphone.
streamB is the screencapture I get from my extension.
They are both MediaStream objects that look alike when inspected (screenshots omitted; see remark 1 below).
But if I remove streamA from the peerConnection and add streamB as above, nothing seems to happen.
The following works as expected (the stream on both ends is removed and re-added):
peerConnection.removeStream(streamA) // __o_j_sep...
peerConnection.addStream(streamA) // __o_j_sep...
More Details
I have found this example which does "the reverse" (Switch from screen capture to audio/video with camera) but can't spot a significant difference.
The peerConnection (an RTCPeerConnection object) is actually created by the SIPML library, source code available here. And I access it like this:
var peerConnection = stack.o_stack.o_layer_dialog.ao_dialogs[1].o_msession_mgr.ao_sessions[0].o_pc
(Yes, this does not look right, but there is no official way to get access to the peer connection; see the discussion here and here.)
Originally I tried to just (ex)change the videoTracks of streamA with the videoTrack of streamB. See the question here. It was suggested to me that I should try to renegotiate the peer connection (by removing/adding streams to it), because addTrack does not trigger a re-negotiation.
I've also asked for help here but the maintainer seems very busy and didn't have a chance to respond yet.
* 1 Remark: Why does streamB not have a videoTracks property? The stream plays in an HTML <video> element and seems to "work". Here is how I get it:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: streamId,
            maxWidth: window.screen.width,
            maxHeight: window.screen.height
            //, maxFrameRate: 3
        }
    }
},
// success callback
function (localMediaStream) {
    SSTREAM = localMediaStream; // streamB
},
// fail callback
function (error) {
    console.log(error);
});
It also seems to have a video track (shown in a screenshot not reproduced here).
I'm running:
OS X 10.9.3
Chrome Version 35.0.1916.153
To answer your first question: when modifying the MediaStream in an active peer connection, the peer connection object will fire a negotiationneeded event. You need to handle that event and re-exchange your SDPs. The main reason behind this is so that both parties know which streams are being sent between them. When the SDPs are exchanged, the MediaStream ID is included, and if there is a new stream with a new ID (even with all other things being equal), a re-negotiation must take place.
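A rough sketch of what handling that event can look like with the modern promise-based API (sendToSignalingServer is a hypothetical helper standing in for whatever signaling channel your application already uses; older Chrome builds used callback forms of createOffer/setLocalDescription):

// Sketch: re-offer whenever the set of streams/tracks on the connection changes.
peerConnection.onnegotiationneeded = async () => {
    const offer = await peerConnection.createOffer();
    await peerConnection.setLocalDescription(offer);
    // Hypothetical helper: ship the new SDP to the remote side, which answers
    // and sends its SDP back to be applied with setRemoteDescription().
    sendToSignalingServer({ type: 'offer', sdp: peerConnection.localDescription.sdp });
};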
For your second question (about SSTREAM): it does indeed contain video tracks, but there is no videoTrack attribute on webkitMediaStreams. You can grab tracks via their ID, however.
Since each media type can have numerous tracks, there is no single attribute for a video track or audio track, but instead an array of them. The .getVideoTracks() call returns an array of the current video tracks, so you could grab a particular video track by indicating its index, e.g. .getVideoTracks()[0], as sketched below.
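For example, with the standard MediaStream accessors:

// Sketch: pull individual tracks out of the screen-capture stream.
var videoTracks = SSTREAM.getVideoTracks(); // array of MediaStreamTrack objects
var firstVideoTrack = videoTracks[0];       // grab one by index
console.log(firstVideoTrack.kind, firstVideoTrack.id);

// ...or look the same track up again later by its ID:
var sameTrack = SSTREAM.getTrackById(firstVideoTrack.id);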
I do something similar: on clicking a button, I remove the active stream and add the other one.
This is the way I do it, and it works perfectly for me:
// Stop the current camera/mic stream and remove it from the peer connection
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

// Constraints for the two replacement streams
var constraints_audio = {
    audio: true
};
var constraints_screen = {
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'screen'
        }
    }
};

// Audio-only stream (Chrome doesn't allow audio together with 'screen')
var gotAudioStream = function (localstream_aud) {
    _this.rtc.localstream_aud = localstream_aud;
    _this.rtc.mediaConstraints = constraints_audio;
    _this.rtc.createOffer();
};
getUserMedia(constraints_audio, gotAudioStream);

// Screen-capture stream
var gotScreenStream = function (localstream) {
    _this.rtc.localstream = localstream;
    _this.rtc.mediaConstraints = constraints_screen;
    _this.rtc.createStream();
    _this.rtc.createOffer();
};
getUserMedia(constraints_screen, gotScreenStream);
Chrome doesn't allow audio along with 'screen' capture, so I create a separate stream for it.
You will need to do the opposite in order to switch back to your older video stream, or indeed to any other stream you want.
Hope this helps
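(Note for later readers: in current browsers the same camera-to-screen switch can usually be done without removing and re-adding streams at all, using RTCRtpSender.replaceTrack, which does not require renegotiation. A minimal sketch, assuming pc is the RTCPeerConnection and screenTrack is the new screen-capture video track:)

// Sketch: swap the outgoing video track in place (no renegotiation needed).
async function switchToScreenShare(pc, screenTrack) {
    const sender = pc.getSenders().find(
        s => s.track && s.track.kind === 'video'
    );
    if (sender) {
        await sender.replaceTrack(screenTrack);
    }
}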
I'm trying to record audio via a microphone with the latest Chrome beta (Version 21.0.1180.15). It seems that almost everything needed to do it is implemented in the Chrome beta now. I even get access to the microphone, though I can't connect the stream to an audio element. To my understanding it should work if there is no bug.
createMediaStreamSource() is not yet implemented. As a workaround I want to use createMediaElementSource() to route the audio from the microphone through a muted audio element.
Using the code below I get one of these two error messages in the console:
GET blob:file%3A///625fd498-f427-43d5-959b-3b49c6d53ab5 404 (Not Found)
or
Not allowed to load local resource: blob:null/8df582cc-b663-489b-bf49-1785226fc7b7
The error is caused by this line:
audio.src = window.webkitURL.createObjectURL(stream)
Is there something wrong with this line? How to connect the stream to the audio element source? Or is it a bug in Chrome that makes it impossible to create an object URL?
Code:
var context = null;
var elementSource = null;

function onError(e) {
    if (e.code == 1) {
        alert('User denied access to their camera');
    } else {
        alert('getUserMedia() not supported by your browser');
    }
}

window.addEventListener('load', initAudio, false);

function initAudio() {
    navigator.webkitGetUserMedia({audio: true}, function (stream) {
        var audio = document.querySelector('#basic-stream');
        audio.src = window.webkitURL.createObjectURL(stream);
        audio.controls = true;

        context = new webkitAudioContext();
        elementSource = context.createMediaElementSource(audio);
        elementSource.connect(context.destination);
    }, onError);
}
<div>
    <audio id="basic-stream" class="audiostream" autoplay muted></audio>
</div>
If it isn't absolutely necessary, please don't re-invent the square wheel: https://github.com/mattdiamond/Recorderjs
I'm not sure if this is related, but there is an outstanding issue regarding getUserMedia() with audio.
http://code.google.com/p/chromium/issues/detail?id=112367
I have searched a lot of demos and examples of getUserMedia, but most are just camera capture, not microphone.
So I downloaded some examples and tried them on my own computer. Camera capture works, but when I changed
navigator.webkitGetUserMedia({video : true},gotStream);
to
navigator.webkitGetUserMedia({audio : true},gotStream);
The browser asks me to allow microphone access first, and then it fails at
document.getElementById("audio").src = window.webkitURL.createObjectURL(stream);
The message is:
GET blob:http%3A//localhost/a5077b7e-097a-4281-b444-8c1d3e327eb4 404 (Not Found)
This is my code: getUserMedia_simple_audio_test
Did I do something wrong? Or does getUserMedia only work for the camera right now?
It is currently not available in Google Chrome. See Issue 112367.
You can see in the demo that it will always throw an error saying:
GET blob:http%3A//whatever.it.is/b0058260-9579-419b-b409-18024ef7c6da 404 (Not Found)
And you can't listen to the microphone either with
{
video: true,
audio: true
}
It is currently supported in Chrome Canary. You need to type about:flags into the address bar then enable Web Audio Input.
The following code connects the audio input to the speakers. WATCH OUT FOR THE FEEDBACK!
<script>
    // this is to store a reference to the input so we can kill it later
    var liveSource;

    // creates an audio context and hooks up the audio input
    function connectAudioInToSpeakers() {
        var context = new webkitAudioContext();
        navigator.webkitGetUserMedia({audio: true}, function (stream) {
            console.log("Connected live audio input");
            liveSource = context.createMediaStreamSource(stream);
            liveSource.connect(context.destination);
        });
    }

    // disconnects the audio input
    function makeItStop() {
        console.log("killing audio!");
        liveSource.disconnect();
    }

    // run this when the page loads
    connectAudioInToSpeakers();
</script>

<input type="button" value="please make it stop!" onclick="makeItStop()"/>
A working example of the above: http://jsfiddle.net/2mLtM/
It's working; you just need to add the toString parameter after audio: true.
Check this article: link