Safari getUserMedia() Unhandled Promise Rejection - javascript

As advised around the internet, I added the muted and playsinline attributes to my video element. I still can't get any picture in Safari 11, only this error.
I also tried removing autoplay from my video element.
Unhandled Promise Rejection: TypeError: Type error
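For reference, the markup looks roughly like this (the id is a placeholder):
<video id="vid" muted playsinline></video>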
Is it possible to get WebRTC working in Safari 11, or am I wasting my time with this?
getUserMedia() works in all other browsers (Chrome, Firefox, Edge, Opera).
Thank you!
I use this shim, https://github.com/addyosmani/getUserMedia.js/blob/gh-pages/lib/getUserMedia.js, which fires a success callback.
Then, in the callback:
var video = camOptions.videoEl; // the video element
var vendorURL = window.URL || window.webkitURL;
try {
    video.src = vendorURL ? vendorURL.createObjectURL(stream) : stream;
}
catch (err) {
    // HERE IS THE TYPE ERROR IN SAFARI
}

The TypeError you are getting is because you are passing the wrong constraints when calling getUserMedia(). This error happens when you pass constraints that are not recognized by the device (browser) or that have invalid values.
Also, you need to use video.srcObject rather than video.src, which is deprecated.
Here's a working example for Safari. Keep in mind this only works on iOS 11 and above:
// Get the <video> element
var video = document.getElementById('vid');

// Default constraints
var constraints = { audio: true, video: true };

// Define the success handler before passing it to getUserMedia()
var handleSuccess = function (stream) {
    video.srcObject = stream;
};

navigator.mediaDevices.getUserMedia(constraints)
    .then(handleSuccess)
    .catch(function (err) {
        console.error(err);
    });

Note: define the handleSuccess callback before calling navigator.mediaDevices.getUserMedia (as the snippet above now does); with var it is hoisted but still undefined at the call site. I was stuck on that for a few minutes. 🤣

Related

video.play() causes an unhandled rejection (NotAllowedError) on iOS

I'm using peer.js to stream video in a React app:
addVideoStream(video: HTMLVideoElement, stream: MediaStream) {
    video.srcObject = stream
    video?.addEventListener('loadedmetadata', () => {
        video.play()
    })
    if (this.videoGrid) this.videoGrid.append(video)
}
I got this error at video.play():
the request is not allowed by the user agent or the platform in the current context
I have already allowed audio and video permissions on iOS, and this code works fine on every other platform. On iOS I have no idea what's wrong; once deployed I just get a black screen. How can I fix this? Thanks in advance.
The problem was how the video tag works on iOS with WebRTC. I used an HTTPS environment (production) and then set these attributes:
if (isMobile && isSafari) {
    this.myVideo.playsInline = true
    this.myVideo.autoplay = true
}
Then it works.
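For what it's worth, the same fix can be written as attributes in the markup; note that iOS generally also requires muted for autoplay to be honored (a minimal sketch):
<video autoplay playsinline muted></video>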

navigator.mediaDevices in Microsoft Edge mobile on iOS 13.3.1

Has anyone tried to capture video from an iPhone camera in the Microsoft Edge mobile browser? Does it work? navigator.mediaDevices returns undefined for me, and I'm wondering whether that browser doesn't support the MediaDevices API at all, or whether it's just a camera access issue.
Please check this article: if the current document isn't loaded securely, or if you use the new MediaDevices API in older browsers, navigator.mediaDevices may be undefined. So check the browser version, clear the browser data, and then retest the code.
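A quick way to test the secure-context part before anything else (window.isSecureContext is a standard property; the warning text is just illustrative):
// getUserMedia is only exposed in secure contexts (https:// or http://localhost)
if (!window.isSecureContext) {
    console.warn('Not a secure context: navigator.mediaDevices will be undefined.');
}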
Besides, before using navigator.mediaDevices, you could try to add the following polyfill:
// Older browsers might not implement mediaDevices at all, so we set an empty object first
if (navigator.mediaDevices === undefined) {
    navigator.mediaDevices = {};
}

// Some browsers partially implement mediaDevices. We can't just assign an object
// with getUserMedia as it would overwrite existing properties.
// Here, we will just add the getUserMedia property if it's missing.
if (navigator.mediaDevices.getUserMedia === undefined) {
    navigator.mediaDevices.getUserMedia = function (constraints) {
        // First get ahold of the legacy getUserMedia, if present
        var getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

        // Some browsers just don't implement it - return a rejected promise with an error
        // to keep a consistent interface
        if (!getUserMedia) {
            return Promise.reject(new Error('getUserMedia is not implemented in this browser'));
        }

        // Otherwise, wrap the call to the old navigator.getUserMedia with a Promise
        return new Promise(function (resolve, reject) {
            getUserMedia.call(navigator, constraints, resolve, reject);
        });
    };
}
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(function (stream) {
        var video = document.querySelector('video');
        // Older browsers may not have srcObject
        if ("srcObject" in video) {
            video.srcObject = stream;
        } else {
            // Avoid using this in new browsers, as it is going away.
            video.src = window.URL.createObjectURL(stream);
        }
        video.onloadedmetadata = function (e) {
            video.play();
        };
    })
    .catch(function (err) {
        console.log(err.name + ": " + err.message);
    });
I reproduced the problem on iOS 13.4 using Microsoft Edge 44.13.7; after adding the polyfill above, the error disappears.

RecordRTC issue - stream only shows timestamp?

I'm rather new to the whole WebRTC thing, and I've been reading a ton of articles about different APIs and how to handle video recording. It seems the more I read, the more confusing the whole thing gets. I know I can use solutions such as Nimbb, but I'd rather keep everything "in house", so to speak.
The way I've got the code right now, the page loads and the user clicks a button to choose the type of input (text or video). When the video button is clicked, the webcam is initialized and turned on to record. However, the stream from the webcam doesn't show up in the page itself. It seems this is because the src of the video is actually an object. The weird thing is that when I try to get more info about the object by logging it to the console, I only get an attribute called currentTime. How does this object become the actual source for the video element? I've tried many different variations of the code below to no avail, so I'm just wondering what I'm doing wrong.
var playerId = 'cam-' + t + '-' + click[1] + '-' + click[2];
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
if (navigator.getUserMedia) {
    function onSuccess(stream) {
        var video = document.getElementById(playerId);
        var vidSource;
        if (window.webkitURL || window.URL) {
            vidSource = (window.webkitURL) ? window.webkitURL.createObjectURL(stream) : window.URL.createObjectURL(stream);
        } else {
            vidSource = stream;
        }
        video.autoplay = true;
        video.src = vidSource;
    }
    function onError(e) {
        console.error('Error: ', e);
    }
    navigator.getUserMedia({video: true, audio: true}, onSuccess, onError);
} else {
    // flash alternative
}
The webkit check actually was the problem, as pointed out by mido22 in the comments.
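For later readers: modern browsers don't need the object-URL dance at all. A sketch of a fixed onSuccess, assuming the same playerId element (the srcObject feature check is the standard approach today):
function onSuccess(stream) {
    var video = document.getElementById(playerId);
    video.autoplay = true;
    if ('srcObject' in video) {
        // Modern path: attach the MediaStream directly
        video.srcObject = stream;
    } else {
        // Legacy fallback; createObjectURL(MediaStream) was removed from newer browsers
        video.src = window.URL.createObjectURL(stream);
    }
}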

What's wrong with my code to record audio in HTML5?

I'm trying to record audio from a microphone with the latest Chrome beta (version 21.0.1180.15). Almost everything needed for this seems to be implemented now, and I even get access to the microphone. However, I can't connect the stream to an audio element, although to my understanding it should work unless there is a bug.
createMediaStreamSource() is not yet implemented. As a workaround, I want to use createMediaElementSource() to route the audio from the microphone through a muted audio element.
Using the code below, I get one of these two error messages in the console:
GET blob:file%3A///625fd498-f427-43d5-959b-3b49c6d53ab5 404 (Not
Found)
or
Not allowed to load local resource:
blob:null/8df582cc-b663-489b-bf49-1785226fc7b7
The error is caused by this line:
audio.src = window.webkitURL.createObjectURL(stream)
Is there something wrong with this line? How do I connect the stream to the audio element's source? Or is it a bug in Chrome that makes it impossible to create an object URL?
Code:
var context = null;
var elementSource = null;

function onError(e) {
    if (e.code == 1) {
        alert('User denied access to their camera');
    } else {
        alert('getUserMedia() not supported by your browser');
    }
}

window.addEventListener('load', initAudio, false);

function initAudio() {
    navigator.webkitGetUserMedia({audio: true}, function (stream) {
        var audio = document.querySelector('#basic-stream');
        audio.src = window.webkitURL.createObjectURL(stream);
        audio.controls = true;
        context = new webkitAudioContext();
        elementSource = context.createMediaElementSource(audio);
        elementSource.connect(context.destination);
    }, onError);
}
<div>
    <audio id="basic-stream" class="audiostream" autoplay muted></audio>
</div>
If it isn't absolutely necessary, please don't re-invent the square wheel: https://github.com/mattdiamond/Recorderjs
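For completeness, Recorder.js is typically wired up roughly like this (a sketch based on the project's README, assuming a browser where createMediaStreamSource() is available; the workerPath and the callback body are illustrative):
navigator.webkitGetUserMedia({ audio: true }, function (stream) {
    var context = new webkitAudioContext();
    // Feed the microphone into the Web Audio graph
    var source = context.createMediaStreamSource(stream);
    // workerPath points at the worker script that ships with Recorder.js
    var rec = new Recorder(source, { workerPath: 'recorderWorker.js' });
    rec.record();
    // ...later, stop and export the recording as a WAV blob
    rec.stop();
    rec.exportWAV(function (blob) {
        console.log('Recorded ' + blob.size + ' bytes');
    });
});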
I'm not sure if this is related, but there is an outstanding issue regarding getUserMedia() with audio.
http://code.google.com/p/chromium/issues/detail?id=112367

Is HTML5's getUserMedia for audio recording working now?

I have searched for a lot of demos and examples of getUserMedia, but most only capture the camera, not the microphone.
So I downloaded some examples and tried them on my own computer. Camera capture works, but when I changed
navigator.webkitGetUserMedia({video : true},gotStream);
to
navigator.webkitGetUserMedia({audio : true},gotStream);
the browser asked me to allow microphone access first, and then it failed at
document.getElementById("audio").src = window.webkitURL.createObjectURL(stream);
The message is:
GET blob:http%3A//localhost/a5077b7e-097a-4281-b444-8c1d3e327eb4 404 (Not Found)
This is my code: getUserMedia_simple_audio_test
Did I do something wrong? Or does getUserMedia only work for the camera right now?
It is currently not available in Google Chrome. See Issue 112367.
You can see in the demo that it will always throw an error saying:
GET blob:http%3A//whatever.it.is/b0058260-9579-419b-b409-18024ef7c6da 404 (Not Found)
And you also can't listen to the microphone with
{
    video: true,
    audio: true
}
It is currently supported in Chrome Canary. You need to type about:flags into the address bar, then enable Web Audio Input.
The following code connects the audio input to the speakers. WATCH OUT FOR THE FEEDBACK!
<script>
// this is to store a reference to the input so we can kill it later
var liveSource;

// creates an audiocontext and hooks up the audio input
function connectAudioInToSpeakers() {
    var context = new webkitAudioContext();
    navigator.webkitGetUserMedia({audio: true}, function (stream) {
        console.log("Connected live audio input");
        liveSource = context.createMediaStreamSource(stream);
        liveSource.connect(context.destination);
    });
}

// disconnects the audio input
function makeItStop() {
    console.log("killing audio!");
    liveSource.disconnect();
}

// run this when the page loads
connectAudioInToSpeakers();
</script>
<input type="button" value="please make it stop!" onclick="makeItStop()"/>
(sorry, I forgot to login, so posting with my proper username...)
Same answer as above; a live demo is at http://jsfiddle.net/2mLtM/
It's working; you just need to add a toString parameter after audio: true.
Check this article - link
