Unable to obtain desktop picker dialog #electron - javascript

I am unable to get a desktop picker dialog for the available sources. I am a newbie; can someone guide me on what I am missing? In Chrome we would use "chrome.desktopCapture.chooseDesktopMedia". I obtained the sources with the code below.
function onAccessApproved(error, sources) {
  if (error) throw error;
  for (var i = 0; i < sources.length; ++i) {
    navigator.webkitGetUserMedia({
      audio: false,
      video: {
        mandatory: {
          chromeMediaSource: 'desktop',
          chromeMediaSourceId: sources[i].id,
          minWidth: 1280,
          maxWidth: 1280,
          minHeight: 720,
          maxHeight: 720
        }
      }
    }, gotShareStream, errorCallback);
    return; // only request a stream for the first source
  }
}
I have tried the Option link but I am getting a BrowserWindow undefined error.
Thanks!

I haven't used Electron, but in WebRTC you need to use something like video: {optional: [{sourceId: source.id}]}. And don't do this for all the sources; do it only for the one source you want a stream from.
To get the available sources use navigator.mediaDevices.enumerateDevices() and then filter them by kind, which can be audioinput, audiooutput, or videoinput.
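A sketch of that filtering step (the groupByKind helper is a hypothetical name; in the browser you would feed it the array that navigator.mediaDevices.enumerateDevices() resolves with):

```javascript
// Group a device list by its kind property ('audioinput', 'audiooutput',
// 'videoinput'). The input is the array resolved by enumerateDevices().
function groupByKind(devices) {
  return devices.reduce(function (acc, d) {
    (acc[d.kind] = acc[d.kind] || []).push(d);
    return acc;
  }, {});
}

// In the browser:
// navigator.mediaDevices.enumerateDevices().then(function (devices) {
//   var byKind = groupByKind(devices);
//   console.log(byKind.videoinput); // available cameras
// });
```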

Related

Electron: how to capture audio from only one app

We are developing a desktop application using Electron with a screen-share capability. For this we use the getUserMedia API, and we have the option to choose which screen or window to capture. This is part of the code for that:
let constraints = {
  audio: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: sourceId
    }
  },
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: sourceId
    }
  }
}
let stream = await navigator.mediaDevices.getUserMedia(constraints)
And we would like to capture audio only from the application that is being streamed. Is it possible to do this? Maybe with some third-party solution?

Camera stream not loading on all devices with maximum quality requirements (HTML/Angular)

I'm developing an application in Angular to capture a photo with the maximum possible quality depending on the device's camera.
Currently, I have this code:
HTML:
<video #video id="video" autoplay muted playsinline></video>
Angular TS:
_requestCameraStream(width: number, height: number, secondTry: boolean) {
  if (navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices
      .getUserMedia({
        video: {
          facingMode: 'environment',
          width: { ideal: width },
          height: { ideal: height },
          frameRate: { ideal: 30 },
        },
      })
      .then(stream => {
        console.log('_getUserMedia -> stream loaded');
        this.loadStream(stream);
      })
      .catch(err => {
        console.log(err);
        if (!secondTry) {
          console.log('Started second try');
          this._requestCameraStream(2560, 1440, true);
        } else {
          this.router.navigateByUrl('id-document/camera-error');
        }
      });
  }
}

private loadStream(stream: MediaStream) {
  const videoDevices = stream.getVideoTracks();
  this.lastStream = stream;
  this.video!.nativeElement.srcObject = stream;
  this.video!.nativeElement.play();
  this.ref.detectChanges();
}
Basically, I check whether the device has a camera available and try to load it with the width and height values the function receives. On the first try I call the function as follows:
this._requestCameraStream(4096, 2160, false);
If the stream fails to load (probably because the camera does not support 4K quality), it tries again with the values this._requestCameraStream(2560, 1440, true);
This actually works pretty well on most devices, but on a Galaxy Note 10 Plus the stream does not load; yet if I click the button to take the picture, the camera does capture the image in 4K quality.
I suspect that the camera has a higher resolution than the screen, so the camera can capture a 4K image, but the screen cannot show a 4K video preview. The problem is that the system does not raise any warning or error that I could catch; it is as if the preview loaded successfully.
How can I detect and handle this error? Or is there some other way to ask the camera for a maximum-quality capture while the preview loads correctly?
You can try defining a range of resolutions instead of trying only two:
async _requestCameraStream() {
  if (!navigator.mediaDevices.getUserMedia) return;
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: {
        facingMode: { exact: 'environment' },
        width: { min: 2288, ideal: 4096, max: 4096 },
        height: { min: 1080, ideal: 2160, max: 2160 },
        frameRate: { ideal: 30 },
      },
    });
    if (stream) {
      console.log('_getUserMedia -> stream loaded');
      this.loadStream(stream);
    }
  } catch (err) {
    console.log(err);
    this.router.navigateByUrl('id-document/camera-error');
  }
}
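As a complementary check (a sketch, not part of the original answer): once getUserMedia resolves, the video track's getSettings() reports the resolution the browser actually granted, which can be lower than the requested ideal even when no error is thrown. Comparing it against a minimum is one way to detect the silent-downgrade case the question describes; isAtLeast is a hypothetical helper name.

```javascript
// Returns true when the granted settings meet a minimum resolution.
// `settings` is the object returned by MediaStreamTrack.getSettings().
function isAtLeast(settings, minWidth, minHeight) {
  return settings.width >= minWidth && settings.height >= minHeight;
}

// In loadStream(stream):
// const settings = stream.getVideoTracks()[0].getSettings();
// if (!isAtLeast(settings, 2560, 1440)) {
//   console.warn('camera granted only', settings.width, 'x', settings.height);
// }
```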
I think your current approach to capturing the maximum-quality image is correct; I have used a similar approach in one of my projects. I think the problem is with the video playback. Browsers have an autoplay policy, and it behaves differently in each browser: it does not allow video content to play without user intervention. Check this URL: https://developer.chrome.com/blog/autoplay/
I think you should add the muted attribute to the video element, and it would be better to ask the user to click before capturing the camera stream. This may not solve the exact problem you are facing, but the problem will be there in some browsers, such as Safari on iPhone; Apple devices do not allow any video content without user intervention.
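A minimal sketch of that advice, assuming a video element and a stream are already in hand (attachStream is a hypothetical helper name): play() returns a promise in modern browsers, and a rejection is how an autoplay-policy block surfaces, so it should be caught rather than ignored.

```javascript
// Attach a MediaStream to a <video> element and start playback.
// Muted playback is exempt from most autoplay policies; a rejected
// play() promise usually means a user gesture is required first.
function attachStream(videoEl, stream) {
  videoEl.muted = true;
  videoEl.srcObject = stream;
  return videoEl.play().catch(function (err) {
    // Fall back to asking for a click/tap before starting playback.
    console.warn('autoplay blocked:', err.name);
  });
}
```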
Regards,
omi

Which alternatives exist for WebRTC getUserMedia given the incompatibility with iOS?

Forgive me if the question is badly phrased, but I developed a streaming service for work using WebRTC getUserMedia on the front end, connected with Socket.IO to Node.js, and it has problems only with iPhones and Safari on macOS. From Stack Overflow and other forums I understood this happens because it is not compatible. So my question is: what can I use instead?
Do I need to use a JavaScript library like React, or something else?
I believe your problem lies not with getUserMedia(), but with MediaRecorder. iOS, and indeed Safari in general, doesn't handle MediaRecorder correctly. (Is Safari taking over from IE as the incompatible browser everybody loves to hate?)
I was able to hack around this problem by creating a MediaRecorder polyfill that delivers motion JPEG rather than the webm you usually get when you encode video. It's video-only, and generates cheesy "video" at that.
If you put the polyfill's index.js file at /js/jpegMediaRecorder.js you can do something like this:
<script src="/js/jpegMediaRecorder.js"></script>
<script>
document.addEventListener('DOMContentLoaded', function () {
  function handleError (error) {
    console.log('getUserMedia error: ', error.message, error.name)
  }
  function onDataAvailableHandler (event) {
    if (event.data && event.data instanceof Blob && event.data.size !== 0) {
      /* send the data to somebody */
    }
  }
  function mediaReady (stream) {
    var mediaRecorder = new MediaRecorder(stream, {
      mimeType: 'image/jpeg', videoBitsPerSecond: 125000, qualityParameter: 0.9
    })
    mediaRecorder.addEventListener('dataavailable', onDataAvailableHandler)
    mediaRecorder.start(10)
  }
  function start () {
    const constraints = {
      video: { width: {min: 160, ideal: 176, max: 640},
               height: {min: 120, ideal: 144, max: 480},
               frameRate: {min: 4, ideal: 10, max: 30} },
      audio: false }
    navigator.mediaDevices.getUserMedia(constraints)
      .then(mediaReady)
      .catch(handleError)
  }
  start()
})
</script>

WebRTC Errors recording video

I've tested https://webrtc.github.io/samples/src/content/getusermedia/record/, but it doesn't appear to work on an iPhone, in either Safari or Chrome; both present getUserMedia errors.
The errors occur with the default settings. I haven't changed anything, I have never tried this before, and it never prompts for camera access.
Clicking Start Camera:
iPhone Safari: navigator.getUserMedia error:Error: Invalid constraint
iPhone Chrome: navigator.getUserMedia error:TypeError: undefined is not an object (evaluating 'navigator.mediaDevices.getUserMedia')
Any ideas?
It says the constraint is invalid.
This is the constraint they are using:
const constraints = {
  audio: {
    echoCancellation: {exact: hasEchoCancellation}
  },
  video: {
    width: 1280, height: 720
  }
};
It could be that echoCancellation is not supported in Safari. So maybe change the code to just:
{
  audio: true,
  video: {
    width: 1280, height: 720
  }
}
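More generally, you can ask the browser which constraint keys it understands via navigator.mediaDevices.getSupportedConstraints() and drop the unsupported ones before calling getUserMedia. A sketch (pruneAudioConstraints is a hypothetical helper; note that Safari's actual complaint may concern the exact requirement rather than the key itself):

```javascript
// Remove constraint keys the browser does not report as supported.
// `supported` is the object returned by getSupportedConstraints(),
// e.g. { echoCancellation: true, width: true, ... }.
// Falls back to plain `true` if nothing usable remains.
function pruneAudioConstraints(audio, supported) {
  if (typeof audio !== 'object' || audio === null) return audio;
  var pruned = {};
  Object.keys(audio).forEach(function (key) {
    if (supported[key]) pruned[key] = audio[key];
  });
  return Object.keys(pruned).length ? pruned : true;
}

// In the browser:
// var supported = navigator.mediaDevices.getSupportedConstraints();
// var audio = pruneAudioConstraints({ echoCancellation: { exact: true } }, supported);
// navigator.mediaDevices.getUserMedia({ audio: audio, video: true })
```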

Desktop Audio Capture Not working for chrome app

According to https://code.google.com/p/chromium/issues/detail?id=223639, Chromium has issues with audio loopback, and it never works in a Chrome app. Can anyone share some links and an explanation of why this does not work, or whether it is possible at all? I tried the code below, but there is a lot of disturbance in the desktop audio.
video: {
  mandatory: {
    chromeMediaSource: 'screen',
    chromeMediaSourceId: id
  }
},
audio: {
  mandatory: {
    chromeMediaSource: 'system',
    chromeMediaSourceId: id,
  }
}
Are multiple streams captured and attached to a single peer connection?
Thanks!
AFAIK only the desktopCapture API supports capturing audio+tab.
The chromeMediaSource value must be 'desktop'.
video: {
  mandatory: {
    chromeMediaSource: 'desktop',
    chromeMediaSourceId: 'sourceIdCapturedUsingChromeExtension'
  }
},
audio: {
  mandatory: {
    chromeMediaSource: 'desktop',
    chromeMediaSourceId: 'sourceIdCapturedUsingChromeExtension'
  }
}
Try the following demo in Chrome Canary:
https://rtcmulticonnection.herokuapp.com/demos/Audio+ScreenSharing.html
However, make sure to enable the chrome://flags#tab-for-desktop-share flag first.
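As an aside, newer Chrome and Electron versions also expose getDisplayMedia, which can request screen video plus audio without an extension-supplied source id; audio loopback remains platform-dependent, so it is worth checking whether an audio track was actually granted. A sketch (mediaDevices is taken as a parameter here purely for testability; in the browser it is simply navigator.mediaDevices):

```javascript
// Request screen capture with audio and warn when the browser grants
// video only, which happens on platforms without audio loopback support.
async function captureScreenWithAudio(mediaDevices) {
  const stream = await mediaDevices.getDisplayMedia({ video: true, audio: true });
  if (stream.getAudioTracks().length === 0) {
    console.warn('browser granted video only; no system/tab audio track');
  }
  return stream;
}

// In the browser: captureScreenWithAudio(navigator.mediaDevices)
```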
