I have a face recognition app: you press a button, the camera turns on, face recognition runs, and you're done. It works well in the browser on all devices.
There is also an app for iOS and Android that reuses this recognition flow, but only through a WebView.
For some reason, the camera does not behave correctly inside the WebView. What happens is: you press the button, a modal appears asking for permission to use the camera, and then the camera opens full screen without the prompts and hints that should be there. If I close that full-screen live view, the correct window with the hints opens, but the camera is frozen on the last frame.
const startVideo = async () => {
  options = new TinyFaceDetectorOptions();
  if (
    navigator.mediaDevices &&
    navigator.mediaDevices.getUserMedia &&
    (await navigator.mediaDevices.enumerateDevices())
  ) {
    // first we call getUserMedia to trigger permissions
    // we need this before deviceCount, otherwise Safari doesn't return all the cameras
    // we need to have the number in order to display the switch front/back button
    navigator.mediaDevices
      .getUserMedia({
        audio: false,
        video: true
      })
      .then((stream: MediaStream) => {
        stream.getTracks().forEach((track: MediaStreamTrack) => {
          track.stop();
        });
        if (videoElem.current && (videoElem.current.srcObject as MediaStream)) {
          videoElem.current.srcObject = null;
        }
        // init the UI and the camera stream
        initCameraStream();
      })
      .catch(error => {
        // https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
        if (error === 'PermissionDeniedError') {
          setModalStatus([modalsNames.videoAccessError, true]);
        }
        if (error.name === 'NotAllowedError') {
          setModalStatus([modalsNames.videoAccessError, true]);
        }
      });
  } else {
    setModalStatus([modalsNames.cameraNotSupportedError, true]);
  }
};
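For what it's worth, the "deviceCount" mentioned in the comments would typically come from enumerateDevices() after the permission prompt. A minimal sketch of that step (countVideoInputs and setShowSwitchButton are hypothetical names, not from the original code):

// enumerateDevices() only returns complete device info after getUserMedia
// has been called once, which is why the permission call above comes first
const countVideoInputs = async () => {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.filter((device) => device.kind === 'videoinput').length;
};

const updateSwitchButton = async () => {
  // only offer the front/back switch when there is more than one camera
  const cameraCount = await countVideoInputs();
  setShowSwitchButton(cameraCount > 1);
};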
Init camera:
const initCameraStream = () => {
  // stop any active streams in the window
  if (videoElem.current && (videoElem.current.srcObject as MediaStream)) {
    (videoElem.current.srcObject as MediaStream)
      .getTracks()
      .forEach((track: MediaStreamTrack) => {
        track.stop();
      });
  }
  if (videoElem.current && (videoElem.current.srcObject as MediaStream)) {
    videoElem.current.srcObject = null;
  }
  // we ask for a square resolution; it will be cropped at the top/bottom (portrait)
  // or at the sides (landscape)
  const sizeH = 1280;
  const sizeW = 1920;
  const constraints = {
    audio: false,
    video: {
      // width: { ideal: sizeW },
      // height: { ideal: sizeH },
      facingMode: currentFacingMode,
      // aspectRatio: { exact: 1.777777778 }
    }
  };
  const handleSuccess = (stream: MediaStream) => {
    if (videoElem.current) {
      videoElem.current.srcObject = stream;
      videoElem.current.onloadedmetadata = () => {
        if (videoElem.current) {
          onPlay();
        }
      };
    }
  };
  const handleError = () => {
    setModalStatus([modalsNames.cameraNotSupportedError, true]);
  };
  navigator.mediaDevices
    .getUserMedia(constraints)
    .then(handleSuccess)
    .catch(handleError);
};
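Since the constraints above pick the camera via facingMode, switching between front and back camera is usually just a matter of flipping currentFacingMode and calling initCameraStream again (it already stops any active tracks first). A rough sketch, assuming currentFacingMode is a plain variable here (with React state you would use its setter instead):

// Sketch: toggle between the front ("user") and back ("environment") camera
const switchCamera = () => {
  currentFacingMode = currentFacingMode === 'user' ? 'environment' : 'user';
  initCameraStream();
};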
I'm currently working on web development and using the camera on my device. I've noticed that the rear camera on my iPhone 14 Pro doesn't automatically focus properly when taking close-up shots, especially when using it on the web. Here are some actual screenshots I took while testing with my iPhone 14 Pro:
For comparison, here are the results of using the same code on an iPhone 12:
I was wondering if anyone has dealt with a similar issue. I haven't been able to find any relevant solutions online, but I have seen other web applications handle this correctly, so I'm hoping there is a way to fix it.
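For reference, here is a rough sketch of how the focus-related capabilities of the active video track could be probed and, where supported, constrained. Note that focusMode and focusDistance come from the Media Capture capability extensions and are an assumption here: iOS Safari may not expose them at all, so this is an experiment rather than a confirmed fix.

// Sketch: probe and (if supported) apply focus-related constraints.
// Not all browsers expose focusMode/focusDistance; iOS Safari may not.
async function tryContinuousFocus(stream) {
  const track = stream.getVideoTracks()[0];
  if (!track || typeof track.getCapabilities !== "function") return;

  const capabilities = track.getCapabilities();
  console.log("focus capabilities", capabilities.focusMode, capabilities.focusDistance);

  if (Array.isArray(capabilities.focusMode) && capabilities.focusMode.includes("continuous")) {
    // advanced constraints are best-effort; unsupported ones are ignored
    await track.applyConstraints({ advanced: [{ focusMode: "continuous" }] });
  }
}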
Here's the source code I used for testing:
<body>
<div id="contentainer">
<div class="title">get divce camera and show video</div>
<div class="video-button">
<button id="start">start</button>
<button id="stop">stop</button>
</div>
<video id="video" width="640" height="480" autoplay></video>
<select class="midia-list-select"></select>
</div>
<script>
const constraints = {
video: true,
};
function inferFacingModeFromLabel(label) {
if (label.toLowerCase().includes("back")) {
return "environment";
} else if (label.toLowerCase().includes("front")) {
return "user";
} else {
return "environment";
}
}
function inferFacingModeFromCapabilities(capabilities) {
if ((capabilities?.facingMode?.indexOf("user") ?? -1) >= 0) {
return "front";
} else if (
(capabilities?.facingMode?.indexOf("environment") ?? -1) >= 0
) {
return "back";
} else {
return undefined;
}
}
function midiaOnSelect() {
const midiaList = document.querySelector(".midia-list-select");
const midiaId = midiaList.value;
navigator.mediaDevices
.getUserMedia({
video: {
deviceId: {
exact: midiaId,
},
},
})
.then((stream) => {
console.log(stream);
const video = document.getElementById("video");
video.srcObject = stream;
});
}
function getMidiaVideoList() {
function createMidiaOption(device, facingMode) {
const option = document.createElement("option");
option.value = device.deviceId;
option.text = `${facingMode.facingModeFromCapabilities}|${facingMode.facingModeFromLabel}------${device.label}`;
return option;
}
navigator.mediaDevices.enumerateDevices().then(async (devices) => {
console.log("divices", devices);
devices.forEach((device) => {
console.log("divice", device);
if (device.kind === "videoinput") {
navigator.mediaDevices
.getUserMedia({
video: {
deviceId: {
exact: device.deviceId,
},
},
})
.then((stream) => {
const track = stream.getVideoTracks()?.[0];
const capabilities = (
track?.getCapabilities ?? (() => undefined)
).bind(track)();
console.log(capabilities);
const midiaList =
document.querySelector(".midia-list-select");
const facingModeFromLabel = inferFacingModeFromLabel(
device?.label
);
const facingModeFromCapabilities =
inferFacingModeFromCapabilities(capabilities);
midiaList.appendChild(
createMidiaOption(device, {
facingModeFromCapabilities,
facingModeFromLabel,
})
);
});
}
});
});
}
window.onload = async function () {
const start = document.getElementById("start");
const stop = document.getElementById("stop");
const midiaList = document.querySelector(".midia-list-select");
start.addEventListener("click", () => {
midiaOnSelect();
video.play();
});
stop.addEventListener("click", () => {
video.pause();
});
midiaList.addEventListener("change", midiaOnSelect);
const video = document.getElementById("video");
const media = await navigator.mediaDevices.getUserMedia({
video: true,
});
media.getTracks().forEach(function (track) {
track.stop();
});
getMidiaVideoList();
};
</script>
</body>
Thank you all in advance for your answers and suggestions. I appreciate your help!
I am trying to first connect two WebRTC peers. Once the connection is established I want to give the users on both sides the option to enable/disable video and audio. This should happen without triggering the signaling process again.
I do run into an issue though: if I call replaceTrack(audioTrack), the remote peer will not play back audio until I also call replaceTrack(videoTrack).
I am unsure why this happens and cannot find any clue in the documentation. Audio does play fine after 10 seconds, once I also attach the video track. Without the video track there is no audio playback. Why?
function createVideoElement() {
const vid = document.createElement("video")
vid.width = 320;
vid.controls = true;
vid.autoplay = true;
const root = document.body;
document.body.appendChild(vid);
return vid;
}
async function RunTestInit() {
console.log("get media access");
const p1_stream_out = await navigator.mediaDevices.getUserMedia({
video: true,
audio: true
});
const p2_stream_out = await navigator.mediaDevices.getUserMedia({
video: true,
audio: true
});
console.log("stream setup");
const p1_stream_in = new MediaStream();
const p2_stream_in = new MediaStream();
const p1_video_in = createVideoElement();
const p2_video_in = createVideoElement();
console.log("peer setup");
const p1 = new RTCPeerConnection();
const p2 = new RTCPeerConnection();
const p1_tca = p1.addTransceiver("audio", {
direction: "sendrecv"
});
const p1_tcv = p1.addTransceiver("video", {
direction: "sendrecv"
});
p1.onicecandidate = (ev) => {
p2.addIceCandidate(ev.candidate);
}
p2.onicecandidate = (ev) => {
p1.addIceCandidate(ev.candidate);
}
p1.onconnectionstatechange = (ev) => {
console.log("p1 state: ", p1.connectionState);
}
p2.onconnectionstatechange = async (ev) => {
console.log("p2 state: ", p2.connectionState);
}
p1.onnegotiationneeded = () => {
//triggers once
console.warn("p1.onnegotiationneeded");
}
p2.onnegotiationneeded = () => {
//should never trigger
console.warn("p2.onnegotiationneeded");
}
p1.ontrack = (ev) => {
console.log("p1.ontrack", ev);
p1_stream_in.addTrack(ev.track);
p1_video_in.srcObject = p1_stream_in;
}
p2.ontrack = (ev) => {
console.log("p2.ontrack", ev);
p2_stream_in.addTrack(ev.track);
p2_video_in.srcObject = p2_stream_in;
}
console.log("signaling");
const offer = await p1.createOffer();
await p1.setLocalDescription(offer);
await p2.setRemoteDescription(offer);
const p2_tca = p2.getTransceivers()[0];
const p2_tcv = p2.getTransceivers()[1];
p2_tca.direction = "sendrecv"
p2_tcv.direction = "sendrecv"
const answer = await p2.createAnswer();
await p2.setLocalDescription(answer);
await p1.setRemoteDescription(answer);
console.log("signaling done");
//send audio from p2 to p1 (direction doesn't matter)
//after this runs nothing will happen and no audio plays
setTimeout(async () => {
await p2_tca.sender.replaceTrack(p2_stream_out.getAudioTracks()[0]);
console.warn("audio playback should start now but nothing happens");
}, 1000);
//audio starts playing once this runs
setTimeout(async () => {
//uncomment this and it works just fine
await p2_tcv.sender.replaceTrack(p2_stream_out.getVideoTracks()[0]);
console.warn("now audio playback starts");
}, 10000);
}
function start() {
setTimeout(async () => {
console.log("Init test case");
await RunTestInit();
}, 1);
}
Same example in the js fiddle (needs camera and microphone access):
https://jsfiddle.net/vnztcx5p/5/
Once audio works this will cause an echo.
That is a known issue. https://bugs.chromium.org/p/chromium/issues/detail?id=813243 and https://bugs.chromium.org/p/chromium/issues/detail?id=403710 have some background information.
In a nutshell, the video element expects you to send both audio and video data, and these need to be synchronized. But you don't send any video data, and the element needs to fire a loadedmetadata and a resize event because that is what the specification says. Hence it will block audio indefinitely.
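One way to work around this while no video track is being sent (this is only a sketch, not something from the linked bug reports) is to route incoming audio tracks to their own Audio element, so playback does not wait for video metadata; using the names from the question's code:

// Sketch: play audio tracks through a dedicated Audio element so they are
// not held back while the video element waits for video metadata.
p1.ontrack = (ev) => {
  if (ev.track.kind === "audio") {
    const audioEl = new Audio();
    audioEl.srcObject = new MediaStream([ev.track]);
    audioEl.play();
  } else {
    p1_stream_in.addTrack(ev.track);
    p1_video_in.srcObject = p1_stream_in;
  }
};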
You can enable/disable audio and video tracks, so you don't have to renegotiate. Note that these tracks have to be added before negotiation starts. You can achieve it with:
mediaStream.getAudioTracks()[0].enabled = false; // or true to enable it.
Or if you want to disable video:
mediaStream.getVideoTracks()[0].enabled = false; // or true to enable it.
Here is the documentation
getAudioTracks()
getVideoTracks()
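A small helper along these lines (just a sketch wrapping the two calls above) keeps the toggle in one place for both kinds:

// Sketch: enable/disable all tracks of a given kind ("audio" or "video")
// on a stream without renegotiating.
function setTracksEnabled(mediaStream, kind, enabled) {
  const tracks = kind === "audio"
    ? mediaStream.getAudioTracks()
    : mediaStream.getVideoTracks();
  tracks.forEach((track) => { track.enabled = enabled; });
}

// e.g. mute the microphone:
// setTracksEnabled(localStream, "audio", false);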
I got this working. It looks like it is more a problem with how HTMLVideoElement works than with WebRTC.
If I set
p1_video_in.srcObject = p1_stream_in;
p2_video_in.srcObject = p2_stream_in;
before I add the tracks to the stream it works.
Complete example looks like this:
function createVideoElement() {
const vid = document.createElement("video")
vid.width = 320;
vid.controls = true;
vid.autoplay = true;
const root = document.body;
document.body.appendChild(vid);
return vid;
}
async function RunTestInit() {
console.log("get media access");
const p1_stream_out = await navigator.mediaDevices.getUserMedia({
video: true,
audio: true
});
const p2_stream_out = await navigator.mediaDevices.getUserMedia({
video: true,
audio: true
});
console.log("stream setup");
const p1_stream_in = new MediaStream();
const p2_stream_in = new MediaStream();
const p1_video_in = createVideoElement();
const p2_video_in = createVideoElement();
p1_video_in.srcObject = p1_stream_in;
p2_video_in.srcObject = p2_stream_in;
console.log("peer setup");
const p1 = new RTCPeerConnection();
const p2 = new RTCPeerConnection();
const p1_tca = p1.addTransceiver("audio", {
direction: "sendrecv"
});
const p1_tcv = p1.addTransceiver("video", {
direction: "sendrecv"
});
p1.onicecandidate = (ev) => {
p2.addIceCandidate(ev.candidate);
}
p2.onicecandidate = (ev) => {
p1.addIceCandidate(ev.candidate);
}
p1.onconnectionstatechange = (ev) => {
console.log("p1 state: ", p1.connectionState);
}
p2.onconnectionstatechange = async (ev) => {
console.log("p2 state: ", p2.connectionState);
}
p1.onnegotiationneeded = () => {
//triggers once
console.warn("p1.onnegotiationneeded");
}
p2.onnegotiationneeded = () => {
//should never trigger
console.warn("p2.onnegotiationneeded");
}
p1.ontrack = (ev) => {
console.log("p1.ontrack", ev);
p1_stream_in.addTrack(ev.track);
}
p2.ontrack = (ev) => {
console.log("p2.ontrack", ev);
p2_stream_in.addTrack(ev.track);
}
console.log("signaling");
const offer = await p1.createOffer();
await p1.setLocalDescription(offer);
await p2.setRemoteDescription(offer);
const p2_tca = p2.getTransceivers()[0];
const p2_tcv = p2.getTransceivers()[1];
p2_tca.direction = "sendrecv"
p2_tcv.direction = "sendrecv"
const answer = await p2.createAnswer();
await p2.setLocalDescription(answer);
await p1.setRemoteDescription(answer);
console.log("signaling done");
//send audio from p2 to p1 (direction doesn't matter)
//after this runs nothing will happen and no audio plays
setTimeout(async () => {
await p2_tca.sender.replaceTrack(p2_stream_out.getAudioTracks()[0]);
console.warn("audio playback should start now but nothing happens");
}, 1000);
//audio starts playing once this runs
setTimeout(async () => {
//uncomment this and it works just fine
await p2_tcv.sender.replaceTrack(p2_stream_out.getVideoTracks()[0]);
console.warn("now audio playback starts");
}, 10000);
}
function start() {
setTimeout(async () => {
console.log("Init test case");
await RunTestInit();
}, 1);
}
I am working on a project where I need the user to be able to record the screen, its audio, and the microphone. At the moment I could only make it capture the screen and its audio.
First I capture the screen and its audio and save the stream to a variable. Then I use that variable to show it in a video component.
invokeGetDisplayMedia(success, error) {
let displaymediastreamconstraints = {
video: {
displaySurface: 'monitor', // monitor, window, application, browser
logicalSurface: true,
cursor: 'always' // never, always, motion
}
};
// the above constraints are NOT supported YET,
// that's why we're overriding them
displaymediastreamconstraints = {
video: true,
audio:true
};
if (navigator.mediaDevices.getDisplayMedia) {
navigator.mediaDevices.getDisplayMedia(displaymediastreamconstraints).then(success).catch(error);
}
else {
navigator.getDisplayMedia(displaymediastreamconstraints).then(success).catch(error);
}
},
captureScreen(callback) {
this.invokeGetDisplayMedia((screen) => {
this.addStreamStopListener(screen, () => {
//
});
callback(screen);
}, function (error) {
console.error(error);
alert('Unable to capture your screen. Please check console logs.\n' + error);
});
},
startRecording() {
this.captureScreen(screen=>{
this.audioStream = audio
console.log(audio)
this.video=this.$refs.videoScreen
this.video.srcObject = screen;
this.recorder = RecordRTC(screen, {
type: 'video'
});
this.recorder.startRecording();
// release screen on stopRecording
this.recorder.screen = screen;
this.videoStart = true;
});
},
I fixed it by adding a function that captures the audio from the microphone:
captureAudio(success, error) {
let displayuserstreamconstraints = {
audio:true
};
if (navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices.getUserMedia(displayuserstreamconstraints).then(success).catch(error);
}
else {
navigator.getUserMedia(displayuserstreamconstraints).then(success).catch(error);
}
},
And by wrapping the existing logic in the startRecording method:
startRecording() {
this.captureAudio((audio) => {
this.captureScreen(screen=>{
this.video=this.$refs.videoScreen
this.audioStream=audio
this.video.srcObject = screen;
this.recorder = RecordRTC(screen, {
type: 'video'
});
this.recorder.startRecording();
// release screen on stopRecording
this.recorder.screen = screen;
this.videoStart = true;
});
})
},
And by updating the stopRecording callback:
stopRecordingCallback() {
this.video.src = this.video.srcObject = null;
this.video=this.$refs.videoScreen
this.video.src = URL.createObjectURL(this.recorder.getBlob());
// MediaStream.stop() is deprecated; stop the individual tracks instead
this.recorder.screen.getTracks().forEach((track) => track.stop());
this.audioStream.getTracks().forEach((track) => track.stop());
this.recorder.destroy();
this.recorder = null;
},
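If the microphone audio should also end up in the recorded file, one option (a sketch, not part of the answer above) is to merge the screen's video tracks and the microphone's audio track into one MediaStream before handing it to RecordRTC:

startRecording() {
  this.captureAudio((audio) => {
    this.captureScreen((screen) => {
      this.audioStream = audio;
      this.video = this.$refs.videoScreen;
      this.video.srcObject = screen;
      // combine the screen's video tracks with the microphone's audio track
      const mixed = new MediaStream([
        ...screen.getVideoTracks(),
        ...audio.getAudioTracks()
      ]);
      this.recorder = RecordRTC(mixed, { type: 'video' });
      this.recorder.startRecording();
      // release screen on stopRecording
      this.recorder.screen = screen;
      this.videoStart = true;
    });
  });
},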
pc.ontrack = (e) => {
let _remoteStream = null
let remoteStreams = this.state.remoteStreams
let remoteVideo = {}
// 1. check if stream already exists in remoteStreams
const rVideos = this.state.remoteStreams.filter(stream => stream.id === socketID)
// 2. if it does exist then add track
if (rVideos.length) {
_remoteStream = rVideos[0].stream
_remoteStream.addTrack(e.track, _remoteStream)
remoteVideo = {
...rVideos[0],
stream: _remoteStream,
}
remoteStreams = this.state.remoteStreams.map(_remoteVideo => {
return _remoteVideo.id === remoteVideo.id && remoteVideo || _remoteVideo
})
} else {
// 3. if not, then create new stream and add track
_remoteStream = new MediaStream()
_remoteStream.addTrack(e.track, _remoteStream)
remoteVideo = {
id: socketID,
name: socketID,
stream: _remoteStream,
}
remoteStreams = [...this.state.remoteStreams, remoteVideo]
}
// const remoteVideo = {
// id: socketID,
// name: socketID,
// stream: e.streams[0]
// }
this.setState(prevState => {
// If we already have a stream in display let it stay the same, otherwise use the latest stream
// const remoteStream = prevState.remoteStreams.length > 0 ? {} : { remoteStream: e.streams[0] }
const remoteStream = prevState.remoteStreams.length > 0 ? {} : { remoteStream: _remoteStream }
// get currently selected video
let selectedVideo = prevState.remoteStreams.filter(stream => stream.id === prevState.selectedVideo.id)
// if the video is still in the list, then do nothing, otherwise set to new video stream
selectedVideo = selectedVideo.length ? {} : { selectedVideo: remoteVideo }
return {
// selectedVideo: remoteVideo,
...selectedVideo,
// remoteStream: e.streams[0],
...remoteStream,
remoteStreams, //: [...prevState.remoteStreams, remoteVideo]
}
})
}
screenshare.onclick=function(){
navigator.mediaDevices.getDisplayMedia(constraints)
.then(success)
.catch(failure)
}
This is my pc.ontrack code.
I added this button event so that users can switch their local camera stream to a screen-share stream.
From the current peer's point of view, the stream is changed.
How can I make the other peers see that person's screen share?
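For reference, the usual way to make remote peers receive the screen without renegotiating is to swap the track on the existing video sender using RTCRtpSender.replaceTrack. A minimal sketch, assuming pc is the RTCPeerConnection from the snippet above and that getDisplayMedia is available:

screenshare.onclick = async function () {
  // capture the screen and take its video track
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const screenTrack = screenStream.getVideoTracks()[0];

  // find the sender that currently transmits the camera video and swap the track;
  // this does not trigger renegotiation, so remote peers keep the same stream
  const videoSender = pc.getSenders().find((s) => s.track && s.track.kind === "video");
  if (videoSender) {
    await videoSender.replaceTrack(screenTrack);
  }
};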
I am getting this error when trying to access my website on an iPhone 7: the main screen loads fine, but the next screen after I click something is just a blank white screen.
I assume this is what it's talking about:
useEffect(() => {
navigator.permissions
.query({ name: "microphone" })
.then((permissionStatus) => {
setMicrophonePermissionGranted(permissionStatus.state === "granted");
permissionStatus.onchange = function () {
setMicrophonePermissionGranted(this.state === "granted");
};
});
navigator.permissions.query({ name: "camera" }).then((permissionStatus) => {
setCameraPermissionGranted(permissionStatus.state === "granted");
permissionStatus.onchange = function () {
setCameraPermissionGranted(this.state === "granted");
};
});
}, []);
How do I fix this?
You need to check whether the Permissions API is available first, and if it is not, fall back to querying the standard APIs directly.
Here is a geolocation example:
Permissions API
Geolocation API
if ( navigator.permissions && navigator.permissions.query) {
//try permissions APIs first
navigator.permissions.query({ name: 'geolocation' }).then(function(result) {
// Will return ['granted', 'prompt', 'denied']
const permission = result.state;
if ( permission === 'granted' || permission === 'prompt' ) {
_onGetCurrentLocation();
}
});
} else if (navigator.geolocation) {
//then the Geolocation API
_onGetCurrentLocation();
}
function _onGetCurrentLocation () {
navigator.geolocation.getCurrentPosition(function(position) {
//imitate map latlng construct
const marker = {
lat: position.coords.latitude,
lng: position.coords.longitude
};
})
}
Permissions.query() is marked as an experimental feature as of June 2021: https://developer.mozilla.org/en-US/docs/Web/API/Permissions/query.
As of today, that means you'll need to implement two UIs / flows: one that uses the Permissions API to tell the user how to proceed, and a more standard one using try/catch blocks. Something like:
useEffect(() => {
requestPermissions();
}, []);
const requestPermissions = async () => {
try {
handlePermissionsGranted();
const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
startRecording();
} catch {
...
}
};
const handlePermissionsGranted = async () => {
if (navigator.permissions && navigator.permissions.query) {
const permissions = await navigator.permissions.query({name: 'microphone'});
permissions.onchange = () => {
setMicrophonePermissionGranted(permissions.state === 'granted');
};
}
};
const startRecording = async () => {
try {
const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
const mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
...
} catch {
... // reaching this catch means that either the browser does not support WebRTC or the user didn't grant permissions
}
};
I was trying to check for the mic and camera permissions on iOS devices and in the Facebook in-app browser, which I guess is what made the whole thing fail, as these APIs don't exist in those environments.
Once I moved that query to a component that only loads when the device is not mobile, my error was fixed.
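For completeness, a guarded version of the original useEffect, as a sketch: the Permissions API is feature-detected and the query is wrapped in try/catch, because navigator.permissions, or the "microphone" / "camera" permission names, may simply not exist in environments such as iOS Safari or in-app browsers. queryPermission is a hypothetical helper; the setters are the ones from the original snippet.

useEffect(() => {
  const queryPermission = async (name, setGranted) => {
    // only query when the Permissions API exists
    if (!navigator.permissions || !navigator.permissions.query) return;
    try {
      const status = await navigator.permissions.query({ name });
      setGranted(status.state === "granted");
      status.onchange = () => setGranted(status.state === "granted");
    } catch {
      // this browser does not recognize the permission name; rely on
      // getUserMedia's own prompt/rejection instead
    }
  };
  queryPermission("microphone", setMicrophonePermissionGranted);
  queryPermission("camera", setCameraPermissionGranted);
}, []);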