In XRViewer on iOS, I'm trying to set up a basic A-Frame scene in AR together with a stream from getUserMedia.
I only have a video element and an a-scene. On enter-vr, I attempt to start a stream from getUserMedia.
On pressing the "AR" button, it prompts me to allow camera access, as expected. But when I allow it, I get a pop-up saying "AR Interruption Occurred" and everything freezes.
Is getUserMedia expected to work while in AR? Is there another way to access the passthrough stream? getUserMedia does work as expected if I don't enter AR.
Here is my code:
<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <script src="https://cdn.jsdelivr.net/gh/aframevr/aframe@54022b80c60b12dc755f4f4a71a779b410dd23d0/dist/aframe-master.min.js"></script>
    <script>
      function start() {
        const videoElement = document.querySelector("#video");
        videoElement.setAttribute("autoplay", "");
        videoElement.setAttribute("muted", "");
        videoElement.setAttribute("playsinline", "");
        videoElement.style.position = "absolute";
        videoElement.style.top = "0px";
        videoElement.style.left = "0px";
        videoElement.style.zIndex = "-2";
        videoElement.style.display = "none";
        navigator.mediaDevices
          .getUserMedia({
            audio: false,
            video: {
              facingMode: { exact: "user" },
            },
          })
          .then((stream) => {
            videoElement.addEventListener("loadedmetadata", () => {
              videoElement.setAttribute("width", videoElement.videoWidth);
              videoElement.setAttribute("height", videoElement.videoHeight);
            });
            videoElement.srcObject = stream;
          })
          .catch((err) => {
            console.error("no video: " + err);
          });
      }
      document.addEventListener("DOMContentLoaded", function () {
        document
          .querySelector("#scene")
          .addEventListener("enter-vr", function () {
            start();
          });
      });
    </script>
  </head>
  <body>
    <a-scene
      id="scene"
      device-orientation-permission-ui="enabled: false"
      stats
    ></a-scene>
    <video id="video"></video>
  </body>
</html>
I am trying to record the screen (with videos playing on it) together with the user's microphone.
See demo: https://jsfiddle.net/4z447wpn/5/
Code below:
<!DOCTYPE html>
<html>
  <head>
    <title>Screen recording using RecordRTC</title>
    <style>
      html, body {
        margin: 0 !important;
        padding: 0 !important;
        width: 100%;
        height: 100%;
      }
    </style>
  </head>
  <body>
    <video controls autoplay height="600" width="800" style="float: left; margin-top: 20px"></video>
    <iframe width="420" height="315" style="float: right; margin-top: 20px"
            src="https://www.youtube.com/embed/9Zr2jjg1X-U">
    </iframe>
    <script src="https://cdn.webrtc-experiment.com/RecordRTC.js"></script>
    <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
    <script src="https://cdn.WebRTC-Experiment.com/getScreenId.js"></script>
    <script>
      function captureScreen(cb) {
        getScreenId(function (error, sourceId, screen_constraints) {
          navigator.mediaDevices.getUserMedia(screen_constraints).then(cb).catch(function (error) {
            console.error('getScreenId error', error);
            alert('Failed to capture your screen. Please check browser console logs for further information.');
          });
        });
      }
      function captureAudio(cb) {
        navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(cb);
      }
      function keepStreamActive(stream) {
        var video = document.createElement('video');
        video.muted = true;
        setSrcObject(stream, video);
        video.style.display = 'none';
        (document.body || document.documentElement).appendChild(video);
      }
      captureScreen(function (screen) {
        keepStreamActive(screen);
        captureAudio(function (mic) {
          keepStreamActive(mic);
          screen.width = window.screen.width;
          screen.height = window.screen.height;
          screen.fullcanvas = true;
          var recorder = RecordRTC([screen, mic], {
            type: 'video',
            mimeType: 'video/webm',
            previewStream: function (s) {
              document.querySelector('video').muted = true;
              setSrcObject(s, document.querySelector('video'));
            }
          });
          // Start recording
          recorder.startRecording();
          // Stop recording after specific seconds
          setTimeout(function () {
            recorder.stopRecording(function () {
              var blob = recorder.getBlob();
              document.querySelector('video').src = URL.createObjectURL(blob);
              document.querySelector('video').muted = false;
              screen.getVideoTracks().forEach(function (track) {
                track.stop();
              });
              screen.getAudioTracks().forEach(function (track) {
                track.stop();
              });
              mic.getVideoTracks().forEach(function (track) {
                track.stop();
              });
              mic.getAudioTracks().forEach(function (track) {
                track.stop();
              });
            });
          }, 20 * 1000);
        });
      });
    </script>
  </body>
</html>
Notes:
(1) Quickly start playing the iframe video (loaded on the right side) after you grant access to the screen and microphone. The demo records everything, stops automatically after 20 seconds, and then plays back the recording. Pause the right-side video to hear the recorded sound.
(2) Chrome users need to install this extension: https://chrome.google.com/webstore/detail/screen-capturing/ajhifddimkapgcifgcodmmfdlknahffk
Problems I face:
(1) It does not record the sound of the videos playing on screen, although it does capture the full screen along with the user's microphone.
(2) If I select the current screen as the capture window, it shows the same screen in an endless loop.
Your demo works on localhost, or on a non-iframe HTTPS website, after passing a second parameter to getScreenId, e.g.
getScreenId(callback, true);
The second argument, i.e. boolean true, enables the speakers (system audio).
Note: if it still doesn't work, test in incognito mode to ignore/bypass the cache.
Note 2: test on localhost or a non-iframe HTTPS website, i.e. on your own domain instead of jsfiddle.
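In context, the only change needed to the earlier captureScreen helper is that second argument (a minimal sketch of the same helper with the flag added):
function captureScreen(cb) {
  getScreenId(function (error, sourceId, screen_constraints) {
    navigator.mediaDevices.getUserMedia(screen_constraints).then(cb);
  }, true); // boolean true asks getScreenId to include speaker/system audio
}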
Updated the answer on Tuesday, May 08 2018.
Please try this code:
<!DOCTYPE html>
<html>
  <head>
    <title>Screen recording using RecordRTC</title>
    <style>
      html, body {
        margin: 0 !important;
        padding: 0 !important;
        width: 100%;
        height: 100%;
      }
    </style>
  </head>
  <body>
    <button class="btn btn-primary" id="stoprecording">STOP RECORDING</button>
    <video id="preview-screen" controls autoplay height="600" width="800" style="float: left; margin-top: 20px"></video>
    <video width="420" height="315" controls="" autoplay="" loop="" style="float: right; margin-top: 20px" onloadedmetadata="typeof OnLoadedMetaData === 'function' ? OnLoadedMetaData() : setTimeout(function() {OnLoadedMetaData();}, 3000);">
      <source src="https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4" type="video/mp4">
    </video>
    <script src="https://cdn.webrtc-experiment.com/RecordRTC.js"></script>
    <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
    <script src="https://cdn.WebRTC-Experiment.com/getScreenId.js"></script>
    <script>
      function captureScreen(cb) {
        getScreenId(function (error, sourceId, screen_constraints) {
          navigator.mediaDevices.getUserMedia(screen_constraints).then(cb).catch(function (error) {
            console.error('getScreenId error', error);
            alert('Failed to capture your screen. Please check browser console logs for further information.');
          });
        }, true);
      }
      function captureAudio(cb) {
        navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(cb);
      }
      function keepStreamActive(stream) {
        var video = document.createElement('video');
        video.muted = true;
        setSrcObject(stream, video);
        video.style.display = 'none';
        (document.body || document.documentElement).appendChild(video);
      }
      var recorder = '';
      var screenRec = '';
      var micRec = '';
      function OnLoadedMetaData() {
        captureScreen(function (screen) {
          keepStreamActive(screen);
          captureAudio(function (mic) {
            keepStreamActive(mic);
            screen.width = window.screen.width;
            screen.height = window.screen.height;
            screen.fullcanvas = true;
            recorder = RecordRTC([screen, mic], {
              type: 'video',
              mimeType: 'video/webm',
              previewStream: function (s) {
                document.querySelector('#preview-screen').muted = true;
                setSrcObject(s, document.querySelector('#preview-screen'));
              }
            });
            screenRec = screen;
            micRec = mic;
            // Start recording
            recorder.startRecording();
          });
          addStreamStopListener(screen, function () {
            btnStopRecording.click();
          });
        });
      }
      var btnStopRecording = document.getElementById('stoprecording');
      btnStopRecording.onclick = function () {
        this.disabled = true;
        recorder.stopRecording(function () {
          var blob = recorder.getBlob();
          document.querySelector('#preview-screen').src = URL.createObjectURL(blob);
          document.querySelector('#preview-screen').muted = false;
          screenRec.getTracks().concat(micRec.getTracks()).forEach(function (track) {
            track.stop();
          });
        });
      };
      function addStreamStopListener(stream, callback) {
        var streamEndedEvent = 'ended';
        if ('oninactive' in stream) {
          streamEndedEvent = 'inactive';
        }
        stream.addEventListener(streamEndedEvent, function () {
          callback();
          callback = function () {};
        }, false);
        stream.getAudioTracks().forEach(function (track) {
          track.addEventListener(streamEndedEvent, function () {
            callback();
            callback = function () {};
          }, false);
        });
        stream.getVideoTracks().forEach(function (track) {
          track.addEventListener(streamEndedEvent, function () {
            callback();
            callback = function () {};
          }, false);
        });
      }
    </script>
  </body>
</html>
I'm trying to record audio from the microphone and then add a download link for it.
This is my current code:
<!DOCTYPE html>
<html>
  <head>
    <title>Weird Problem</title>
    <script>
      function microphone() {
        if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
          navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function (stream) {
            var recorder = new MediaRecorder(stream);
            var chunks = [];
            recorder.ondataavailable = function (e) {
              chunks.push(e.data);
            };
            recorder.onstop = function () {
              stream.getTracks()[0].stop();
              var blob = new Blob(chunks, { type: "audio/ogg; codecs=opus" });
              var url = window.URL.createObjectURL(blob);
              document.getElementById("player").src = url;
              document.getElementById("upload").href = url;
              document.getElementById("upload").download = "test.ogg";
            };
            recorder.start();
            setTimeout(function () {
              recorder.stop();
            }, 5000);
          }).catch(function (error) {
            console.log(error);
          });
        }
      }
    </script>
  </head>
  <body>
    <input type="button" value="Record Microphone" onclick="microphone();">
    <br>
    <audio controls id="player"></audio>
    <br>
    <a id="upload">Download Microphone Recording</a>
  </body>
</html>
I'm on Chrome, and for me it plays the microphone audio correctly, but when I try to download it, the file won't play or even open.
I get various errors saying that the file contains data in an unknown format, so I think it's something to do with the audio headers.
Any help would be highly appreciated!
It turns out this issue is a bit more complicated than I expected.
I'm now adapting code from Recorder.js.
The online demo does exactly what I need.
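For anyone hitting the same "unknown format" errors, one likely culprit (an assumption, not a confirmed diagnosis) is the hard-coded Blob type: Chrome's MediaRecorder typically produces WebM rather than Ogg, so labelling the data "audio/ogg" and saving it as test.ogg can confuse external players. A minimal sketch that reuses the recorder's reported mimeType instead:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function (stream) {
  var recorder = new MediaRecorder(stream);
  var chunks = [];
  recorder.ondataavailable = function (e) {
    chunks.push(e.data);
  };
  recorder.onstop = function () {
    stream.getTracks().forEach(function (track) { track.stop(); });
    // recorder.mimeType reports the container/codec actually used,
    // e.g. "audio/webm;codecs=opus" on Chrome.
    var blob = new Blob(chunks, { type: recorder.mimeType });
    var link = document.getElementById("upload");
    link.href = window.URL.createObjectURL(blob);
    // Choose a file extension that matches the real container.
    link.download = recorder.mimeType.indexOf("ogg") > -1 ? "test.ogg" : "test.webm";
  };
  recorder.start();
  setTimeout(function () { recorder.stop(); }, 5000);
});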
I'd like to record a video in the browser and then play it back for the user. I'm currently using the MediaRecorder API (only available in Firefox for now) to do this. It works fine for videos shorter than a few seconds, but otherwise the video doesn't show up at all.
There are no errors in the console or similar, and I couldn't find anything about any file size limitations in the documentation.
Here's the code I'm using (Firefox only):
index.html:
<!DOCTYPE html>
<html>
  <body>
    <button id="start">Start</button>
    <button id="stop">Stop</button>
    <video id="player" src="" width="300" height="300" autoplay></video>
    <script src="script.js"></script>
  </body>
</html>
script.js:
var record = document.getElementById('start');
var stop = document.getElementById('stop');
var video = document.getElementById('player');
var constraints = { video: true };
var onSuccess = function (stream) {
  var mediaRecorder = new MediaRecorder(stream);
  record.onclick = function () {
    mediaRecorder.start();
  };
  stop.onclick = function () {
    mediaRecorder.stop();
  };
  mediaRecorder.ondataavailable = function (e) {
    video.src = window.URL.createObjectURL(e.data);
  };
};
var onError = function (err) {
  console.log('The following error occurred: ' + err);
};
navigator.mozGetUserMedia(constraints, onSuccess, onError);
On Codepen: http://codepen.io/anon/pen/xGqKgE
Is this a bug/browser issue? Are there any limitations to this API that I'm not aware of?
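As a point of comparison, a commonly used pattern (a sketch, not a confirmed fix for this exact case) accumulates every dataavailable chunk and only builds the playback Blob on stop; a longer recording may be delivered as several chunks, and a src built from a single chunk won't play:
var chunks = [];
mediaRecorder.ondataavailable = function (e) {
  // Long recordings can arrive as several blobs rather than one.
  chunks.push(e.data);
};
mediaRecorder.onstop = function () {
  var blob = new Blob(chunks, { type: mediaRecorder.mimeType });
  video.src = window.URL.createObjectURL(blob);
  chunks = [];
};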
I am trying to study WebRTC. Here is an example where I just want to show my camera's video.
<html>
  <head>
    <title>Web RTC</title>
  </head>
  <body>
    <video autoplay></video>
    <script type="text/javascript">
      navigator.getUserMedia = navigator.getUserMedia ||
        navigator.webkitGetUserMedia ||
        navigator.mozGetUserMedia ||
        navigator.msGetUserMedia;
      var video = document.querySelector('video');
      var hdConstraints = {
        video: {
          mandatory: {
            minWidth: 1280,
            minHeight: 720
          }
        }
      };
      var errorCallback = function (e) {
        console.log('Rejected!', e);
      };
      var successCallback = function (stream) {
        video.src = window.URL.createObjectURL(stream);
      };
      if (navigator.getUserMedia) {
        navigator.getUserMedia(hdConstraints, successCallback, errorCallback);
      } else {
        console.log('sh');
      }
    </script>
  </body>
</html>
The problem is that Chrome won't let my camera start. I clicked the camera icon and told Chrome to ask me for permission next time, so in case anyone is thinking in that direction, that is not the issue. In Firefox this works just fine. The first console.log (the error callback) fires with an error, and Google did not help me with it. You can try it yourself. Any ideas?
Thank you in advance.
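For what it's worth, here is a sketch using the current promise-based API (assuming a recent Chrome): the mandatory/minWidth syntax above is the legacy Chrome constraint format, and modern browsers attach the stream via srcObject rather than createObjectURL:
navigator.mediaDevices.getUserMedia({
  video: { width: { min: 1280 }, height: { min: 720 } }
}).then(function (stream) {
  // Attach the stream directly; createObjectURL(MediaStream) was removed.
  document.querySelector('video').srcObject = stream;
}).catch(function (e) {
  console.log('Rejected!', e);
});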
After a couple of hours of struggling, here I am. I have the following code, which should simply start my webcam and display the stream on the webpage:
<!doctype html>
<html>
  <head>
    <title>HTML5 Webcam Test</title>
  </head>
  <body>
    <video id="sourcevid" autoplay>Put your fallback message here.</video>
    <div id="errorMessage"></div>
    <script>
      video = document.getElementById('sourcevid');
      navigator.getUserMedia = navigator.webkitGetUserMedia || navigator.getUserMedia;
      window.URL = window.URL || window.webkitURL;
      function gotStream(stream) {
        if (window.URL) {
          video.src = window.URL.createObjectURL(stream);
        } else {
          video.src = stream; // Opera.
        }
        video.onerror = function (e) {
          stream.stop();
        };
        stream.onended = noStream;
      }
      function noStream(e) {
        var msg = 'No camera available.';
        if (e.code == 1) {
          msg = 'User denied access to use camera.';
        }
        document.getElementById('errorMessage').textContent = msg;
      }
      navigator.webkitGetUserMedia({video: true}, gotStream, noStream);
    </script>
  </body>
</html>
No errors in the console, but no webcam stream either; just the "User denied access to use camera." message.
I tried another example, too long to show here, but again, as soon as I run the page, the stream falls into the .onended function:
function gotStream(stream) {
  video.src = URL.createObjectURL(stream);
  video.onerror = function () {
    stream.stop();
  };
  stream.onended = noStream;
  [...]
Where noStream is a simple function that prints something:
function noStream() {
  document.getElementById('errorMessage').textContent = 'No camera available.';
}
So basically, when I run the second example, I'm shown "No camera available." on the webpage.
I'm running Chrome Version 22.0.1229.94. I saw somewhere that I needed to enable some flags, named Enable MediaStream and Enable PeerConnection, but in my chrome://flags I could only find the second one, which I enabled.
Any thoughts? Is the API I'm using outdated by any chance? Can somebody point me to a working example?
Thanks
According to http://www.webrtc.org/running-the-demos, the getUserMedia API is available in the stable channel as of Chrome 21, without the need for any flag.
I think the error happens because you are trying to use the stream without setting the stream URL properly. Keep in mind that you need to attach the stream differently in Chrome and Opera.
I would structure your code as something like the following:
function gotStream(stream) {
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
    video.play();
  } else {
    video.src = stream; // Opera.
    video.play();
  }
  video.onerror = function (e) {
    stream.stop();
  };
  stream.onended = noStream;
}
function noStream(e) {
  var msg = 'No camera available.';
  if (e.code == 1) {
    msg = 'User denied access to use camera.';
  }
  document.getElementById('errorMessage').textContent = msg;
}
var options = { video: true, toString: function () { return 'video'; } };
navigator.getUserMedia(options, gotStream, noStream);
EDIT:
You need to grab the source video element and replace its source with the media stream. I have edited the code above accordingly.
video = document.getElementById('sourcevid');
I recommend reading these two articles:
http://www.html5rocks.com/en/tutorials/getusermedia/intro/
http://dev.opera.com/articles/view/playing-with-html5-video-and-getusermedia-support/