According to https://code.google.com/p/chromium/issues/detail?id=223639, Chromium has issues with audio loopback, and it never works in a Chrome app. Can anyone share some links or an explanation of why this is not working, or whether it is possible at all? I tried the code below, but there is a lot of distortion in the desktop audio.
video: {
  mandatory: {
    chromeMediaSource: 'screen',
    chromeMediaSourceId: id
  }
},
audio: {
  mandatory: {
    chromeMediaSource: 'system',
    chromeMediaSourceId: id
  }
}
Also, can multiple streams be captured and attached to a single peer connection?
Thanks!
AFAIK, only the desktopCapture API supports capturing audio along with the tab/desktop. The chromeMediaSource value must be 'desktop':
video: {
  mandatory: {
    chromeMediaSource: 'desktop',
    chromeMediaSourceId: 'sourceIdCapturedUsingChromeExtension'
  }
},
audio: {
  mandatory: {
    chromeMediaSource: 'desktop',
    chromeMediaSourceId: 'sourceIdCapturedUsingChromeExtension'
  }
}
Try the following demo on Chrome Canary:
https://rtcmulticonnection.herokuapp.com/demos/Audio+ScreenSharing.html
However, make sure to enable the chrome://flags#tab-for-desktop-share flag.
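To tie this together, here is a minimal sketch of building the combined audio+video desktop-capture constraints from a source ID (as returned by chrome.desktopCapture.chooseDesktopMedia in an extension). The helper name buildDesktopConstraints is my own; the constraint keys follow the answer above.

```javascript
// Hypothetical helper: builds getUserMedia constraints for a combined
// audio+video desktop capture from a chooseDesktopMedia source ID.
function buildDesktopConstraints(sourceId) {
  return {
    audio: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: sourceId
      }
    },
    video: {
      mandatory: {
        chromeMediaSource: 'desktop',
        chromeMediaSourceId: sourceId
      }
    }
  };
}

// Usage inside an extension page (sketch), after chooseDesktopMedia returns an id:
// navigator.webkitGetUserMedia(buildDesktopConstraints(id), onStream, onError);
```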
Related
We are developing a desktop application with screen-sharing capability using Electron. For this we use the getUserMedia API, and we have the option to choose which screen or window to capture. This is part of the code for that:
let constraints = {
  audio: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: sourceId
    }
  },
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: sourceId
    }
  }
}
let stream = await navigator.mediaDevices.getUserMedia(constraints)
And we would like to capture audio only from the application that is being streamed. Is it possible to do this? Maybe with some third-party solution?
I'm developing an application in Angular to capture a photo at the maximum quality the device's camera supports.
Currently, I have this code:
HTML:
<video #video id="video" autoplay muted playsinline></video>
Angular TS:
_requestCameraStream(width: number, height: number, secondTry: boolean) {
  if (navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices
      .getUserMedia({
        video: {
          facingMode: 'environment',
          width: { ideal: width },
          height: { ideal: height },
          frameRate: { ideal: 30 },
        },
      })
      .then(stream => {
        console.log('_getUserMedia -> stream loaded');
        this.loadStream(stream);
      })
      .catch(err => {
        console.log(err);
        if (!secondTry) {
          console.log('Started second try');
          this._requestCameraStream(2560, 1440, true);
        } else {
          this.router.navigateByUrl('id-document/camera-error');
        }
      });
  }
}
private loadStream(stream: MediaStream) {
  const videoDevices = stream.getVideoTracks();
  this.lastStream = stream;
  this.video!.nativeElement.srcObject = stream;
  this.video!.nativeElement.play();
  this.ref.detectChanges();
}
Basically, I check whether the device has a camera available and try to load it with the width and height values that the function receives. On the first try I call the function as follows:
this._requestCameraStream(4096, 2160, false);
If the stream fails to load (probably because the camera does not support 4K) it tries again with this._requestCameraStream(2560, 1440, true);
This is actually working pretty well on most devices, but on a Galaxy Note 10 Plus, the stream does not load, but if I click the button to take the picture, the camera does capture the image in 4k quality.
I suspect that the camera has a higher resolution than the screen, so the camera can capture a 4k image, but the screen can't load a 4k video as a preview. The problem is: the system does not trigger any warning or errors that I could capture. It is as if the preview loaded successfully.
How can I detect and treat this error? Or maybe, is there any other way that I can request the camera to capture a maximum quality image with the preview loading correctly?
You can try defining a range of resolutions instead of just two fixed ones:
async _requestCameraStream() {
  if (!navigator.mediaDevices.getUserMedia) return;
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: {
        facingMode: { exact: 'environment' },
        width: { min: 2288, ideal: 4096, max: 4096 },
        height: { min: 1080, ideal: 2160, max: 2160 },
        frameRate: { ideal: 30 },
      },
    });
    if (stream) {
      console.log('_getUserMedia -> stream loaded');
      this.loadStream(stream);
    }
  } catch (err) {
    console.log(err);
    this.router.navigateByUrl('id-document/camera-error');
  }
}
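On the question of detecting a silent mismatch: once a stream is obtained, MediaStreamTrack.getSettings() reports the resolution the browser actually applied, which can be compared against what was requested. A minimal sketch; the helper name resolutionMatches is my own:

```javascript
// Hypothetical helper: compares the requested resolution with the
// settings the browser actually applied to a video track.
// `settings` is the plain object returned by track.getSettings().
function resolutionMatches(settings, requestedWidth, requestedHeight) {
  return settings.width === requestedWidth && settings.height === requestedHeight;
}

// Usage in the browser (sketch):
// const track = stream.getVideoTracks()[0];
// if (!resolutionMatches(track.getSettings(), 4096, 2160)) {
//   console.warn('Preview is running below the requested resolution');
// }
```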
I think your current approach to capturing the maximum-quality image is correct; I have used a similar approach in one of my projects. I think the problem is with video playback. Browsers have an autoplay policy, and it behaves differently across browsers: it does not allow video content to play without user intervention. Check this URL: https://developer.chrome.com/blog/autoplay/
I think you should add a muted attribute to the video element, and it would be better to ask the user to click before capturing the camera stream. This may not solve the exact problem you are facing, but it will come up in some browsers anyway, e.g. on iPhone: Apple devices do not allow any video content to play without user intervention.
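To make an autoplay failure visible instead of silent, the promise returned by HTMLMediaElement.play() can be awaited and its rejection handled. A minimal sketch, assuming any object with a promise-returning play() (the helper name tryPlay is mine):

```javascript
// Hypothetical helper: plays a media element and reports whether
// playback was blocked (typically a NotAllowedError under the
// autoplay policy) instead of letting the rejection go unnoticed.
async function tryPlay(videoEl) {
  try {
    await videoEl.play(); // resolves once playback starts
    return { playing: true, error: null };
  } catch (err) {
    return { playing: false, error: err.name || String(err) };
  }
}
```

In the component above, this could be called after assigning srcObject; when it returns { playing: false }, show a tap-to-play button so the user gesture unblocks playback.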
Regards,
omi
I've tested https://webrtc.github.io/samples/src/content/getusermedia/record/, but it doesn't appear to work on iPhone, in either Safari or Chrome; it shows getUserMedia errors.
The errors occur by default: I haven't changed any settings, I have never tried this before, and it never prompts for camera access.
Clicking Start Camera:
iPhone Safari: navigator.getUserMedia error:Error: Invalid constraint
iPhone Chrome: navigator.getUserMedia error:TypeError: undefined is not an object (evaluating 'navigator.mediaDevices.getUserMedia')
Any ideas?
It says the constraint is invalid.
This is the constraint they are using:
const constraints = {
  audio: {
    echoCancellation: { exact: hasEchoCancellation }
  },
  video: {
    width: 1280, height: 720
  }
};
It could be that echoCancellation is not supported in Safari. So maybe change the code to just:
{
  audio: true,
  video: {
    width: 1280, height: 720
  }
}
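Rather than guessing, navigator.mediaDevices.getSupportedConstraints() reports which constraint names the browser understands, and unsupported ones can be dropped before calling getUserMedia. A sketch under that assumption; the helper name filterAudioConstraints is my own:

```javascript
// Hypothetical helper: keeps only the audio constraints the browser
// reports as supported. `supported` is the plain object returned by
// navigator.mediaDevices.getSupportedConstraints().
function filterAudioConstraints(audioConstraints, supported) {
  const result = {};
  for (const key of Object.keys(audioConstraints)) {
    if (supported[key]) result[key] = audioConstraints[key];
  }
  // Fall back to plain `audio: true` if nothing survived.
  return Object.keys(result).length > 0 ? result : true;
}

// Usage in the browser (sketch):
// const supported = navigator.mediaDevices.getSupportedConstraints();
// const audio = filterAudioConstraints({ echoCancellation: { exact: true } }, supported);
// navigator.mediaDevices.getUserMedia({ audio, video: { width: 1280, height: 720 } });
```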
I am unable to get the desktop-picker dialog for the available sources. I am a newbie; can someone guide me on what I am missing? In Chrome we would use chrome.desktopCapture.chooseDesktopMedia. I obtained the sources from the code below.
function onAccessApproved(error, sources) {
  if (error) throw error;
  for (var i = 0; i < sources.length; ++i) {
    navigator.webkitGetUserMedia({
      audio: false,
      video: {
        mandatory: {
          chromeMediaSource: 'desktop',
          chromeMediaSourceId: sources[i].id,
          minWidth: 1280,
          maxWidth: 1280,
          minHeight: 720,
          maxHeight: 720
        }
      }
    }, gotShareStream, errorCallback);
    return;
  }
}
I have tried the option in this link, but I am getting a "BrowserWindow undefined" error.
Thanks!
I haven't used Electron, but in WebRTC you need to use something like video: { optional: [{ sourceId: source.id }] }. And don't do this for all the sources: do it only for the one source you want a stream from.
To get the available sources, use navigator.mediaDevices.enumerateDevices() and then filter them by kind, which can be audioinput, audiooutput, or videoinput.
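As a small sketch of that filtering step (the helper name devicesOfKind is mine; the kind strings are the standard enumerateDevices values):

```javascript
// Hypothetical helper: filters a device list, as returned by
// navigator.mediaDevices.enumerateDevices(), down to one kind.
function devicesOfKind(devices, kind) {
  return devices.filter(d => d.kind === kind);
}

// Usage in the browser (sketch):
// const devices = await navigator.mediaDevices.enumerateDevices();
// const cameras = devicesOfKind(devices, 'videoinput');
```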
I'm learning how the YouTube player works with JavaScript. I have this code because I want to make a video gallery, and it works perfectly in any desktop browser (the idea is to switch between videos by ID). The problem is that when I test it on my iPad it doesn't display anything. Any suggestions for iOS?
Thanks in advance!
swfobject.embedSWF(
  "http://www.youtube.com/v/3sL8aaMw7ZQ?enablejsapi=1&rel=0&fs=1",
  "ytplayer-temp",
  "400",
  "226",
  "10.1",
  false,
  false,
  { allowScriptAccess: "always", allowFullScreen: "true" },
  { id: "ytplayer" }
);
function ytplayer_loadvideo(id) {
  var o = document.getElementById("ytplayer");
  if (o) {
    o.loadVideoById(id);
  }
}
There is no Flash support on iOS. You have to use the HTML5 player instead.
https://developers.google.com/youtube/getting_started
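With the HTML5 iframe player, switching videos by ID works much like the Flash version, since the iframe API's player object also exposes loadVideoById. As a small, testable sketch, here is a hypothetical helper (the name youtubeEmbedUrl is mine) that builds the iframe embed URL for a given video ID with the JS API enabled:

```javascript
// Hypothetical helper: builds an HTML5 iframe embed URL for a YouTube
// video ID, enabling the JS API so loadVideoById can be used later.
function youtubeEmbedUrl(videoId) {
  return 'https://www.youtube.com/embed/' + encodeURIComponent(videoId) + '?enablejsapi=1&rel=0';
}

// Usage with the iframe API (sketch):
// const player = new YT.Player('ytplayer', { videoId: '3sL8aaMw7ZQ' });
// player.loadVideoById(nextId); // same call as in the Flash version
```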