I've tested https://webrtc.github.io/samples/src/content/getusermedia/record/, but it doesn't appear to work on iPhone, in either Safari or Chrome; both throw getUserMedia errors.
The errors occur out of the box: I haven't changed any settings, I had never tried this page before, and it never prompts for camera access.
Clicking Start Camera:
iPhone Safari: navigator.getUserMedia error:Error: Invalid constraint
iPhone Chrome: navigator.getUserMedia error:TypeError: undefined is not an object (evaluating 'navigator.mediaDevices.getUserMedia')
Any ideas?
It says the constraint is invalid.
This is the constraint they are using:
const constraints = {
  audio: {
    echoCancellation: {exact: hasEchoCancellation}
  },
  video: {
    width: 1280, height: 720
  }
};
It could be that echoCancellation is not supported in Safari. So maybe change the code to just:
{
  audio: true,
  video: {
    width: 1280, height: 720
  }
}
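To be safer you could also feature-detect before requesting it. This is only a sketch (it assumes hasEchoCancellation comes from the sample's checkbox, as in the original code); getSupportedConstraints() reports which constraint names the browser understands:

// Only request echoCancellation where the browser reports support
// for that constraint; fall back to plain audio otherwise.
const supported = navigator.mediaDevices.getSupportedConstraints();
const constraints = {
  audio: supported.echoCancellation
    ? { echoCancellation: { exact: hasEchoCancellation } }
    : true,
  video: { width: 1280, height: 720 }
};
navigator.mediaDevices.getUserMedia(constraints)
  .then(stream => console.log('got stream'))
  .catch(err => console.log('getUserMedia error:', err));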
While using MediaRecorder in Safari, the recorded video is almost 15x larger than in Chrome for the same duration. Is there any way to reduce the final size of the recorded video?
I tried lowering the pixel dimensions of the video:
await navigator.mediaDevices.getUserMedia({
  video: {
    width: { exact: 320 },
    height: { exact: 240 }
  },
  audio: true
});
but the size of the video is still the same.
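For what it's worth, the recorded file size is governed mostly by the encoder bitrate, not the capture resolution, so changing pixel size alone won't shrink it much. Here is a sketch of constraining the bitrate instead, using the standard MediaRecorder options (Safari may treat these values only as hints):

// Ask the encoder for a lower bitrate; resolution alone does not
// control the output size.
const stream = await navigator.mediaDevices.getUserMedia({
  video: { width: { exact: 320 }, height: { exact: 240 } },
  audio: true
});
const recorder = new MediaRecorder(stream, {
  videoBitsPerSecond: 250000, // ~250 kbps video
  audioBitsPerSecond: 64000   // ~64 kbps audio
});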
I'm developing an application in Angular to capture a photo with the maximum possible quality depending on the device's camera.
Currently, I have this code:
HTML:
<video #video id="video" autoplay muted playsinline></video>
Angular TS:
_requestCameraStream(width: number, height: number, secondTry: boolean) {
  if (navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices
      .getUserMedia({
        video: {
          facingMode: 'environment',
          width: { ideal: width },
          height: { ideal: height },
          frameRate: { ideal: 30 },
        },
      })
      .then(stream => {
        console.log('_getUserMedia -> stream loaded');
        this.loadStream(stream);
      })
      .catch(err => {
        console.log(err);
        if (!secondTry) {
          console.log('Started second try');
          this._requestCameraStream(2560, 1440, true);
        } else {
          this.router.navigateByUrl('id-document/camera-error');
        }
      });
  }
}

private loadStream(stream: MediaStream) {
  const videoDevices = stream.getVideoTracks();
  this.lastStream = stream;
  this.video!.nativeElement.srcObject = stream;
  this.video!.nativeElement.play();
  this.ref.detectChanges();
}
Basically I check if the device has a camera available and try to load it with the width and height values that the function receives. On the first try I call the function as follows:
this._requestCameraStream(4096, 2160, false);
If the stream fails to load (probably because the camera does not support 4k quality), it tries again with this._requestCameraStream(2560, 1440, true);
This is actually working pretty well on most devices, but on a Galaxy Note 10 Plus the stream does not load; yet if I click the button to take the picture, the camera does capture the image in 4k quality.
I suspect that the camera has a higher resolution than the screen, so the camera can capture a 4k image, but the screen can't load a 4k video as a preview. The problem is: the system does not trigger any warning or errors that I could capture. It is as if the preview loaded successfully.
How can I detect and treat this error? Or maybe, is there any other way that I can request the camera to capture a maximum quality image with the preview loading correctly?
You can try defining a range of resolutions instead of trying only two fixed ones:
async _requestCameraStream() {
  if (!navigator.mediaDevices.getUserMedia) return;
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: {
        facingMode: { exact: "environment" },
        width: { min: 2288, ideal: 4096, max: 4096 },
        height: { min: 1080, ideal: 2160, max: 2160 },
        frameRate: { ideal: 30 },
      },
    });
    if (stream) {
      console.log('_getUserMedia -> stream loaded');
      this.loadStream(stream);
    }
  } catch (err) {
    console.log(err);
    this.router.navigateByUrl('id-document/camera-error');
  }
}
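To detect the case where the preview silently differs from what was requested, you could also inspect the live track right after the stream is obtained. This sketch uses the standard getSettings() call; the thresholds are just the minimums from the constraints above:

// Compare what the camera actually delivered with what was requested.
const track = stream.getVideoTracks()[0];
const { width, height } = track.getSettings();
console.log('delivered ' + width + 'x' + height);
if (width < 2288 || height < 1080) {
  console.log('stream is below the requested minimum');
}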
I think your current approach is correct for capturing the maximum quality image; I have used a similar approach in one of my projects. I think the problem is with the video playback. Browsers have an autoplay policy, and it behaves differently in different browsers: it does not allow video content to play without user intervention. Check this URL: https://developer.chrome.com/blog/autoplay/
I think you should add a muted attribute to the video element, and it would be better to ask the user to click before capturing the camera stream. Maybe this does not solve the exact problem you are facing, but it will come up in some browsers regardless; on iPhone, for example, Apple does not allow any video content to play without user intervention.
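As a sketch of that advice (the element id and button are illustrative, not from your code): start the stream from a click handler and check the promise returned by play(), so an autoplay block surfaces as a catchable error instead of failing silently:

// Request the camera only after a user gesture, and surface
// autoplay rejections from play().
document.getElementById('startButton').addEventListener('click', async () => {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.getElementById('video');
  video.muted = true;   // muted playback is far less likely to be blocked
  video.srcObject = stream;
  try {
    await video.play(); // rejects if the browser blocks playback
  } catch (err) {
    console.log('play() was blocked:', err);
  }
});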
Forgive me if the question is bad, but I developed a streaming service for work using WebRTC getUserMedia on the front end, connected with Socket.IO on Node.js, and it has problems only with iPhones and Safari on macOS. Looking on Stack Overflow and other forums, I understood that this happens because it is not compatible. So my question is: what can I use instead?
Do I need to use a JavaScript library like ReactJS or another?
I believe your problem lies not with getUserMedia(), but with MediaRecorder. iOS, and indeed Safari in general, doesn't handle MediaRecorder correctly. (Is Safari taking over from IE as the incompatible browser everybody loves to hate?)
I was able to hack around this problem by creating a MediaRecorder polyfill that delivers motion JPEG rather than the webm usually produced when you encode video. It's video-only, and generates cheesy "video" at that.
If you put the index.js file for this at /js/jpegMediaRecorder.js you can do something like this.
<script src="/js/jpegMediaRecorder.js"></script>
<script>
  document.addEventListener('DOMContentLoaded', function () {
    function handleError (error) {
      console.log('getUserMedia error: ', error.message, error.name)
    }
    function onDataAvailableHandler (event) {
      var buf = event.data || null
      if (event.data && event.data instanceof Blob && event.data.size !== 0) {
        /* send the data to somebody */
      }
    }
    function mediaReady (stream) {
      var mediaRecorder = new MediaRecorder(stream, {
        mimeType: 'image/jpeg', videoBitsPerSecond: 125000, qualityParameter: 0.9
      })
      mediaRecorder.addEventListener('dataavailable', onDataAvailableHandler)
      mediaRecorder.start(10)
    }
    function start () {
      const constraints = {
        video: { width: {min: 160, ideal: 176, max: 640},
                 height: {min: 120, ideal: 144, max: 480},
                 frameRate: {min: 4, ideal: 10, max: 30} },
        audio: false
      }
      navigator.mediaDevices.getUserMedia(constraints)
        .then(mediaReady)
        .catch(handleError)
    }
    start()
  })
</script>
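If you only want the polyfill where it is needed, you could feature-detect native support first; here is a sketch using the standard MediaRecorder.isTypeSupported() (the mime types tested are just examples):

<script>
  // Decide whether to fall back to the motion-JPEG polyfill.
  var needsPolyfill = typeof window.MediaRecorder === 'undefined' ||
      !(MediaRecorder.isTypeSupported('video/webm') ||
        MediaRecorder.isTypeSupported('video/mp4'));
  console.log('Use polyfill:', needsPolyfill);
</script>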
On the latest Samsung S10 and S20 phones, I am facing a back-camera blocked issue in the browser when accessing it using navigator.mediaDevices.getUserMedia in JavaScript, but I am able to access the front camera successfully. These S10 and S20 phones have three or more back cameras.
Note: it worked well on the Samsung S9 for both front and back cameras; I believe it has a single back camera, so there is no camera access issue on the S9.
Below is the simple JS code used to access the back and front cameras.
navigator.mediaDevices.getUserMedia({
  video: {
    width: screen.width > ipad_size ? 1280 : { ideal: 640 },
    height: screen.width > ipad_size ? 720 : { ideal: 480 },
    facingMode: method == 2 ? "user" : { exact: "environment" },
  },
})
  .then(function (stream) {
    console.log("Access camera: ");
  })
  .catch(function (err) {
    console.log("Unable to access camera: " + err);
  });
facingMode: method == 2 ? "user" : "environment",
is what I recommend. With exact: "environment" the constraint is mandatory, so getUserMedia fails outright when the browser cannot satisfy it on one of the multiple back cameras; the bare value is treated as a preference instead.
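If the plain preference still opens the wrong lens on multi-camera phones, a possible fallback is to enumerate the cameras and pick one by deviceId. This is only a sketch; labels vary by device and are populated only after camera permission has been granted once:

// Pick a specific back camera by deviceId, falling back to the
// facingMode preference when no label matches.
async function getBackCameraStream() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const backCams = devices.filter(d =>
    d.kind === 'videoinput' && /back|rear/i.test(d.label));
  const video = backCams.length
    ? { deviceId: { exact: backCams[0].deviceId } }
    : { facingMode: 'environment' };
  return navigator.mediaDevices.getUserMedia({ video });
}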
I am unable to get the desktop picker dialog for the available sources. I am a newbie; can someone guide me on what I am missing? In Chrome we use "chrome.desktopCapture.chooseDesktopMedia". I obtained the sources from the code below.
function onAccessApproved(error, sources) {
  if (error) throw error;
  for (var i = 0; i < sources.length; ++i) {
    navigator.webkitGetUserMedia({
      audio: false,
      video: {
        mandatory: {
          chromeMediaSource: 'desktop',
          chromeMediaSourceId: sources[i].id,
          minWidth: 1280,
          maxWidth: 1280,
          minHeight: 720,
          maxHeight: 720
        }
      }
    }, gotShareStream, errorCallback);
    return;
  }
}
I have tried the Option link but I am getting a BrowserWindow undefined error.
Thanks!
I haven't used Electron, but in WebRTC you need to use something like video: {optional: [{sourceId: source.id}]}. And don't do this for all the sources; do it only for the one source you want a stream from.
To get the available sources, use navigator.mediaDevices.enumerateDevices() and then filter them by kind, which can be audioinput, audiooutput, or videoinput.
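A minimal sketch of that filtering:

// List capture devices and group them by kind.
navigator.mediaDevices.enumerateDevices().then(devices => {
  const cameras = devices.filter(d => d.kind === 'videoinput');
  const mics = devices.filter(d => d.kind === 'audioinput');
  console.log('cameras:', cameras.map(d => d.label || d.deviceId));
  console.log('microphones:', mics.map(d => d.label || d.deviceId));
});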