How can I get a list of video cameras attached to my computer using JavaScript? - javascript

I want to display a list of video cameras attached to the user's computer, and when they select one, display streaming video from that camera in an HTML5 <video> tag.
How can I get a list of the video cameras attached to the user's computer?

This only works in Chrome and Edge:
<script>
navigator.mediaDevices.enumerateDevices().then(function (devices) {
  for (var i = 0; i < devices.length; i++) {
    var device = devices[i];
    if (device.kind === 'videoinput') {
      var option = document.createElement('option');
      option.value = device.deviceId;
      option.text = device.label || 'camera ' + (i + 1);
      document.querySelector('select#videoSource').appendChild(option);
    }
  }
});
</script>
<select id="videoSource"></select>
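Labels come back empty until the page has been granted camera access, and the question also asks how to stream the chosen device. A minimal sketch of both, assuming a <video id="preview"> element that is not in the original answer:
<video id="preview" autoplay playsinline></video>
<script>
var select = document.querySelector('select#videoSource');
// Requesting any camera once makes enumerateDevices() return real labels.
navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  stream.getTracks().forEach(function (t) { t.stop(); }); // only needed the permission
});
// Stream whichever camera the user picks from the dropdown.
select.onchange = function () {
  navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: select.value } }
  }).then(function (stream) {
    document.getElementById('preview').srcObject = stream;
  });
};
</script>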

Perhaps Navigator.getUserMedia() (which uses WebRTC under the hood) is what you're looking for, though I don't see anything that will directly tell you what devices are available (the list of devices isn't exposed to your code; it's presented to the user when asking for permission to access available hardware).
Also note the browser support: Chrome 21+, Firefox 20+, Opera 12+, no support for IE and possibly none for Safari.

Try this out (note that MediaStreamTrack.getSources() used below is deprecated; modern browsers expose the same information through navigator.mediaDevices.enumerateDevices()):
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="author" content="Victor Stan">
<meta name="description" content="Get multiple video streams on one page. Adapted from code by Muaz Khan">
<title>Video Camera</title>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.3/jquery.min.js" ></script>
<style type="text/css" media="screen">
video {
border:1px solid gray;
}
</style>
</head>
<body>
<script>
if (typeof MediaStreamTrack === 'undefined' || !MediaStreamTrack.getSources) {
  document.body.innerHTML = '<h1>Incompatible Browser Detected. Try <strong style="color:red;">Chrome Canary</strong> instead.</h1>';
}
var videoSources = [];
MediaStreamTrack.getSources(function(media_sources) {
console.log(media_sources);
alert('media_sources : '+media_sources);
media_sources.forEach(function(media_source){
if (media_source.kind === 'video') {
videoSources.push(media_source);
}
});
getMediaSource(videoSources);
});
var get_and_show_media = function(id) {
var constraints = {};
constraints.video = {
optional: [{ sourceId: id}]
};
navigator.webkitGetUserMedia(constraints, function(stream) {
console.log('webkitGetUserMedia');
console.log(constraints);
console.log(stream);
var mediaElement = document.createElement('video');
mediaElement.src = window.URL.createObjectURL(stream);
document.body.appendChild(mediaElement);
mediaElement.controls = true;
mediaElement.play();
}, function (e)
{
alert('Hii');
document.body.appendChild(document.createElement('hr'));
var strong = document.createElement('strong');
strong.innerHTML = JSON.stringify(e);
alert('strong.innerHTML : '+strong.innerHTML);
document.body.appendChild(strong);
});
};
var getMediaSource = function(media) {
console.log(media);
media.forEach(function(media_source) {
if (!media_source) return;
if (media_source.kind === 'video')
{
// add buttons for each media item
var button = $('<input/>', {id: media_source.id, value:media_source.id, type:'submit'});
$("body").append(button);
// show video on click
$(document).on("click", "#"+media_source.id, function(e){
console.log(e);
console.log(media_source.id);
get_and_show_media(media_source.id);
});
}
});
}
</script>
</body>
</html>

JavaScript cannot access your cameras to return a list. You will need to use a Flash SWF to get the camera information and pass it back to your page's JavaScript.
EDIT:
To those who downvoted: these methods will not give him a dropdown list of available cameras. If one does, please post a link or code. At the current date, the only way to get a list of cameras (which is what his question was) is to use Flash (or possibly Silverlight, but Flash has much broader install coverage). I've edited my answer to be a little more specific about getting the list versus accessing a camera.

Related

JavaScript code runs fine on CodeSandbox but not locally or on a webserver

I tried to read a QR code using JavaScript code found in this tutorial.
The code provided by the tutorial works inside the codesandbox linked from it; however, it doesn't work when I tried the exact same code on my laptop or on my remote webserver. I've literally copied and pasted the code with the same file configuration, filenames, etc., but I'm getting the following JS error in my browser:
SyntaxError: Identifier 'qrcode' has already been declared (at qrCodeScanner.js:1:1)
Since I run the exact same code, I don't understand what is going on. Is there something needed on the server side to make the code work that is not mentioned in the tutorial?
If you want to see the code used and see it in action, you can test the codesandbox instance there.
EDIT
Here's the code I use :
(HTML)
<!DOCTYPE html>
<html>
<head>
<title>QR Code Scanner</title>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0" />
<link rel="stylesheet" href="./src/style.css" />
<script src="https://rawgit.com/sitepoint-editors/jsqrcode/master/src/qr_packed.js"></script>
</head>
<body>
<div id="container">
<h1>QR Code Scanner</h1>
<a id="btn-scan-qr">
<img src="https://dab1nmslvvntp.cloudfront.net/wp-content/uploads/2017/07/1499401426qr_icon.svg">
</a>
<canvas hidden="" id="qr-canvas"></canvas>
<div id="qr-result" hidden="">
<b>Data:</b> <span id="outputData"></span>
</div>
</div>
<script src="./src/qrCodeScanner.js"></script>
</body>
</html>
(Javascript)
const qrcode = window.qrcode;
const video = document.createElement("video");
const canvasElement = document.getElementById("qr-canvas");
const canvas = canvasElement.getContext("2d");
const qrResult = document.getElementById("qr-result");
const outputData = document.getElementById("outputData");
const btnScanQR = document.getElementById("btn-scan-qr");
let scanning = false;
qrcode.callback = res => {
if (res) {
outputData.innerText = res;
scanning = false;
video.srcObject.getTracks().forEach(track => {
track.stop();
});
qrResult.hidden = false;
canvasElement.hidden = true;
btnScanQR.hidden = false;
}
};
btnScanQR.onclick = () => {
navigator.mediaDevices
.getUserMedia({ video: { facingMode: "environment" } })
.then(function(stream) {
scanning = true;
qrResult.hidden = true;
btnScanQR.hidden = true;
canvasElement.hidden = false;
video.setAttribute("playsinline", true); // required to tell iOS safari we don't want fullscreen
video.srcObject = stream;
video.play();
tick();
scan();
});
};
function tick() {
canvasElement.height = video.videoHeight;
canvasElement.width = video.videoWidth;
canvas.drawImage(video, 0, 0, canvasElement.width, canvasElement.height);
scanning && requestAnimationFrame(tick);
}
function scan() {
try {
qrcode.decode();
} catch (e) {
setTimeout(scan, 300);
}
}
Problem
The problem is that you are probably using a live server or just opening the HTML file directly, whereas the sandbox uses parcel-bundler. In a classic script, top-level declarations share the global scope, so the library's var qrcode collides with your const qrcode; a bundler (or a module) gives each file its own scope, which is why the sandbox works.
Solutions
Type module
Replace
<script src="./src/qrCodeScanner.js"></script>
with
<script type="module" src="./src/qrCodeScanner.js"></script>
Rename
Change your variable to something else like const myQrcode
Use a bundler
You can use parcel-bundler as in the sandbox, or any other bundler that will resolve the variable collision for you.
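To see why the module fix works, here is a minimal sketch of the collision (the inline scripts stand in for qr_packed.js and qrCodeScanner.js):
<!-- Classic scripts share one global scope, so the second declaration throws: -->
<script>var qrcode = {};</script>
<script>const qrcode = window.qrcode; // SyntaxError: Identifier 'qrcode' has already been declared</script>
<!-- A module has its own top-level scope, so the same const is fine: -->
<script type="module">const qrcode = window.qrcode;</script>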

How can I upload a file to a GitHub repo in JavaScript?

I have an audio file which is generated by a JS script integrated into my Streamlit web-app with components.html, like this:
components.html(
"""
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<!-- Set up your HTML here -->
<center>
<p><button id="record">Record</button></p>
<div id="sound-clip"></div>
</center>
<script src="https://code.jquery.com/jquery-3.3.1.min.js"
integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script>
<script>
// Set up the AudioContext.
const audioCtx = new AudioContext();
// Top-level variable keeps track of whether we are recording or not.
let recording = false;
// Ask user for access to the microphone.
if (navigator.mediaDevices) {
navigator.mediaDevices.getUserMedia({ "audio": true }).then((stream) => {
// Instantiate the media recorder.
const mediaRecorder = new MediaRecorder(stream);
// Create a buffer to store the incoming data.
let chunks = [];
mediaRecorder.ondataavailable = (event) => {
chunks.push(event.data);
}
// When you stop the recorder, create an empty audio clip.
mediaRecorder.onstop = (event) => {
const audio = new Audio();
audio.setAttribute("controls", "");
$("#sound-clip").append(audio);
$("#sound-clip").append("<br />");
// Combine the audio chunks into a blob, then point the empty audio clip to that blob.
const blob = new Blob(chunks, { "type": "audio/wav; codecs=0" });
audio.src = window.URL.createObjectURL(blob);
// Clear the `chunks` buffer so that you can record again.
chunks = [];
};
mediaRecorder.start();
recording = true;
$("#record").html("Stop");
// Set up event handler for the "Record" button.
$("#record").on("click", () => {
if (recording) {
mediaRecorder.stop();
recording = false;
$("#record").html("Record");
} else {
// Restart the recorder so a second recording can be made.
mediaRecorder.start();
recording = true;
$("#record").html("Stop");
}
});
}).catch((err) => {
// Throw alert when the browser is unable to access the microphone.
alert("Oh no! Your browser cannot access your computer's microphone.");
});
} else {
// Throw alert when the browser cannot access any media devices.
alert("Oh no! Your browser cannot access your computer's microphone. Please update your browser.");
}
</script>
</body>
</html>
"""
)
Since I'm using Streamlit I need to upload the generated file to a bucket (I was thinking of using a simple GitHub repo for now), but I'm having trouble understanding how to do it, given that the script is wrapped inside components.html. Is it possible to upload the file and later retrieve it to use inside my Python script for some calculations?
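One possible route, sketched under the assumption that a GitHub repo is acceptable as a bucket: GitHub's REST API creates a file with a single PUT to /repos/{owner}/{repo}/contents/{path}, with the file content base64-encoded. OWNER, REPO and TOKEN below are placeholders, and since exposing a token in the browser is unsafe, a real app should proxy this call through its own server:
// Sketch only; call this from the recorder's onstop handler with the recorded blob.
async function uploadToGitHub(blob, path) {
  const buf = await blob.arrayBuffer();
  // The contents API wants base64; chunk this for large files to avoid a huge argument list.
  const b64 = btoa(String.fromCharCode(...new Uint8Array(buf)));
  await fetch("https://api.github.com/repos/OWNER/REPO/contents/" + path, {
    method: "PUT",
    headers: { "Authorization": "token TOKEN" },
    body: JSON.stringify({ message: "add recording", content: b64 })
  });
}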

Controlling MP4 playback using JavaScript

I am trying to clone the following site:
https://www.apple.com/mac-pro/
I am still in the prototyping phase and the big ticket item I am trying to figure out is how they are playing their MP4 file backwards when you scroll up the page. If you scroll down the page a few steps and then back up, you will see what I mean.
So far I have tried the following techniques:
Tweening currentTime property of video element
Using requestAnimationFrame and using the timestamp in the callback to update the currentTime property to the desired value
Using the requestAnimationFrame technique, I am now getting a partially usable result in every browser other than Chrome. Chrome is ok if you want to rewind maybe .5 seconds, but any more than that and it will get jumpy.
I have also made the following discoveries:
Chrome hates trying to rewind an MP4 file
As much as Chrome hates rewinding MP4 files, also make sure that you don't have an audio track on your video file. It will make it even slower.
So I feel I have a pretty good understanding of the options available to me, but the one thing that makes me think I am missing something is that the Apple website functions ok in Chrome.
I have started debugging their code which is located at:
https://images.apple.com/v/mac-pro/home/b/scripts/overview.js
And from what I can tell they seem to be using requestAnimationFrame, but I can't understand why they are getting a better result. Does anyone have any ideas on how to achieve this effect?
BTW - I understand that videos are not really meant to be played backwards and they will never play predictably backwards. I have even had occasions on the Apple website where the rewinding can be jerky. But they still have good 2-3 second rewind transitions and the result is definitely acceptable.
Here is my relevant JavaScript and HTML so far:
var envyVideo, currentVideoTrigger = 0,
currentIndicator, startTime, vid, playTimestamp, playTo, playAmount, triggeredTime, rewindInterval;
$(function() {
vid = document.getElementById("envy-video");
$("#play-button").click(function() {
vid.play();
});
$("#rewind-button").click(function() {
vid.pause();
playTo = parseFloat($("#play-to-time").val());
playAmount = playTo - vid.currentTime;
triggeredTime = vid.currentTime;
requestAnimationFrame(rewindToPointInTime);
});
});
function rewindToPointInTime(timestamp) {
if (!playTimestamp) playTimestamp = timestamp;
var timeDifference = (timestamp - playTimestamp) / 1000;
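// Note: playAmount * (timeDifference / Math.abs(playAmount)) reduces to
// sign(playAmount) * timeDifference, i.e. seeking at 1x speed in playAmount's direction.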
vid.currentTime = triggeredTime + (playAmount * (timeDifference / Math.abs(playAmount)));
if (vid.currentTime > playTo) {
requestAnimationFrame(rewindToPointInTime);
} else {
playTimestamp = null;
playAmount = null;
}
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Rhino Envy</title>
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<script src="./js/envy.js"></script>
<link rel="stylesheet" href="./css/envy.css">
</head>
<body>
<div id="envy-video-container">
<video id="envy-video" src="./videos/prototype_animation.mp4"></video>
</div>
<div id="video-controls">
<p id="video-current-time"></p>
<div class="video-control"><button id="rewind-button">rewind to</button><input type="text" id="play-to-time" placeholder="forward time" value="0"></div>
<button id="play-button">play</button>
</div>
<ul id="envy-steps">
<li id="envy-step-indicator-1"></li>
<li id="envy-step-indicator-2"></li>
<li id="envy-step-indicator-3"></li>
</ul>
<section id="envy-full-range">
<div id="envy-1-door-link"></div>
<div id="envy-2-door-link"></div>
<div id="envy-3-door-link"></div>
</section>
</body>
</html>
One solid way I can think of would be to use two videos: one in the normal direction, and the other one reversed.
You could then simply switch between which video is to be played, and only update the currentTime of the hidden one in the background.
With this solution, you can even rewind audio!
To reverse a video, you can use ffmpeg's command
ffmpeg -i input.mp4 -vf reverse -af areverse reversed.mp4
Note: you may feel some gaps at the switch, which could probably be mitigated by using a single visible video element fed from the other elements' streams, but I'll leave that for an update; I'm short on time right now.
const vids = document.querySelectorAll('video');
vids.forEach(v => {
v.addEventListener('loadedmetadata', canplay);
v.addEventListener('timeupdate', timeupdate);
});
let visible = 0;
function timeupdate(evt) {
if (this !== vids[visible]) return;
let other = (visible + 1) % 2;
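// A reversed copy's timestamp t corresponds to (duration - t) in the original,
// which keeps the hidden video aligned with the visible one.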
vids[other].currentTime = this.duration - this.currentTime;
}
document.getElementById('switch').onclick = e => {
visible = (visible + 1) % 2;
show(vids[visible]);
}
// wait until both vids have loaded a bit
let loaded = 0;
function canplay() {
if (++loaded < vids.length) return;
hide(vids[1]);
show(vids[0]);
}
function show(el) {
el.muted = false;
const p = el.play();
el.style.display = 'block';
const other = vids[(visible + 1) % 2];
// only newest chrome and FF...
if (p && p.then) {
p.then(_ => hide(other));
} else {
hide(other);
}
}
function hide(el) {
el.muted = true;
el.pause();
el.style.display = 'none';
}
document.getElementById('pause').onclick = function(e) {
if (vids[visible].paused) {
this.textContent = 'pause';
vids[visible].play();
} else {
this.textContent = 'play';
vids[visible].pause();
}
}
<button id="switch">switch playing direction</button>
<button id="pause">pause</button>
<div id="vidcontainer">
<video id="normal" src="https://dl.dropboxusercontent.com/s/m2htty4a8a9fel1/tst.mp4?dl=0" loop="true"></video>
<video id="reversed" src="https://dl.dropboxusercontent.com/s/lz85k8tftj2j8x6/tst-reversed.mp4?dl=0" loop="true"></video>
</div>

playing multiple audio files

I am trying to create a playlist with the code below, but I seem to be getting some errors. Firebug says play() is not a function. Please help; I have spent half of my day trying to find a solution. Here is my code:
<head>
<title></title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<script>
var current, playlist;
current = 0;
function() {
current++;
var songs = Array("en_ra1.mp3", "en_ra2.mp3", "en_ra3.mp3", "en_ra0.mp3");
playlist = songs.length;
if (current == playlist) {
//do nothing or stop
} else {
this.playOptions = document.getElementById("audio").src = songs[current];
this.playOptions.play();
}
}
</script>
</head>
<body>
<audio id="audio" onended="loadplaylist()" src="ny_option1.mp3" controls ></audio>
Note: when I include the autoplay attribute it works just fine, despite the error showing in the Firebug console.
I'm not seeing where you declare the loadplaylist function; presumably a typo.
In your function you are setting this.playOptions to the string from the array, not to the player. I think your function should read something like this:
function loadplaylist() {
  current++;
  var songs = Array("en_ra1.mp3", "en_ra2.mp3", "en_ra3.mp3", "en_ra0.mp3");
  playlist = songs.length;
  if (current == playlist) {
    // do nothing or stop
  } else {
    this.playOptions = document.getElementById("audio");
    this.playOptions.src = songs[current];
    this.playOptions.play();
  }
}
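For completeness, a variant sketch of the same playlist that avoids inline handlers entirely (same file names assumed, and the first track still comes from the src attribute):
var songs = ["en_ra1.mp3", "en_ra2.mp3", "en_ra3.mp3", "en_ra0.mp3"];
var current = 0;
var player = document.getElementById("audio");
// Advance to the next track each time the current one finishes.
player.addEventListener("ended", function () {
  if (current < songs.length) {
    player.src = songs[current++];
    player.play();
  }
});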

How to record webcam and audio using webRTC and a server-based Peer connection

I would like to record the user's webcam and audio and save it to a file on the server. These files would then be served up to other users.
I have no problems with playback; however, I'm having problems getting the content to record.
My understanding is that the getUserMedia .record() function has not yet been written - only a proposal has been made for it so far.
I would like to create a peer connection on my server using the PeerConnectionAPI. I understand this is a bit hacky, but I'm thinking it should be possible to create a peer on the server and record what the client-peer sends.
If this is possible, I should then be able to save this data to flv or any other video format.
My preference is actually to record the webcam + audio client-side, to allow the client to re-record videos if they didn't like their first attempt before uploading. This would also allow for interruptions in network connections. I've seen some code which allows recording of individual 'images' from the webcam by sending the data to the canvas - that's cool, but I need the audio too.
Here's the client side code I have so far:
<video autoplay></video>
<script language="javascript" type="text/javascript">
function onVideoFail(e) {
console.log('webcam fail!', e);
};
function hasGetUserMedia() {
// Note: Opera is unprefixed.
return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia || navigator.msGetUserMedia);
}
if (hasGetUserMedia()) {
// Good to go!
} else {
alert('getUserMedia() is not supported in your browser');
}
window.URL = window.URL || window.webkitURL;
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia || navigator.msGetUserMedia;
var video = document.querySelector('video');
var streamRecorder;
var webcamstream;
if (navigator.getUserMedia) {
navigator.getUserMedia({audio: true, video: true}, function(stream) {
video.src = window.URL.createObjectURL(stream);
webcamstream = stream;
// streamrecorder = webcamstream.record();
}, onVideoFail);
} else {
alert ('failed');
}
function startRecording() {
streamRecorder = webcamstream.record();
setTimeout(stopRecording, 10000);
}
function stopRecording() {
streamRecorder.getRecordedData(postVideoToServer);
}
function postVideoToServer(videoblob) {
/* var x = new XMLHttpRequest();
x.open('POST', 'uploadMessage');
x.send(videoblob);
*/
var data = {};
data.video = videoblob;
data.metadata = 'test metadata';
data.action = "upload_video";
jQuery.post("http://www.foundthru.co.uk/uploadvideo.php", data, onUploadSuccess);
}
function onUploadSuccess() {
alert ('video uploaded');
}
</script>
<div id="webcamcontrols">
<a class="recordbutton" href="javascript:startRecording();">RECORD</a>
</div>
You should definitely have a look at Kurento. It provides WebRTC server infrastructure that allows you to record from a WebRTC feed and much more. You can also find some examples for the application you are planning here. It is really easy to add recording capabilities to that demo and store the media file in a URI (local disk or wherever).
The project is licensed under Apache 2.0 (formerly LGPL).
EDIT 1
Since this post, we've added a new tutorial that shows how to add the recorder in a couple of scenarios
kurento-hello-world-recording: simple recording tutorial, showing the different capabilities of the recording endpoint.
kurento-one2one-recording: How to record a one-to-one communication in the media server.
kurento-hello-world-repository: use an external repository to record the file.
Disclaimer: I'm part of the team that develops Kurento.
I believe using Kurento or other MCUs just for recording videos would be a bit of overkill, especially considering that Chrome has had MediaRecorder API support since v47 and Firefox since v25. So at this juncture, you might not even need an external JS library to do the job; try this demo I made to record video/audio using MediaRecorder:
Demo - works in Chrome and Firefox (pushing the blob to the server is intentionally left out)
Github Code Source
If running Firefox, you can test it right here (Chrome needs HTTPS):
'use strict'
let log = console.log.bind(console),
id = val => document.getElementById(val),
ul = id('ul'),
gUMbtn = id('gUMbtn'),
start = id('start'),
stop = id('stop'),
stream,
recorder,
counter = 1,
chunks,
media;
gUMbtn.onclick = e => {
let mv = id('mediaVideo'),
mediaOptions = {
video: {
tag: 'video',
type: 'video/webm',
ext: '.webm', // match the video/webm container above
gUM: {
video: true,
audio: true
}
},
audio: {
tag: 'audio',
type: 'audio/ogg',
ext: '.ogg',
gUM: {
audio: true
}
}
};
media = mv.checked ? mediaOptions.video : mediaOptions.audio;
navigator.mediaDevices.getUserMedia(media.gUM).then(_stream => {
stream = _stream;
id('gUMArea').style.display = 'none';
id('btns').style.display = 'inherit';
start.removeAttribute('disabled');
recorder = new MediaRecorder(stream);
recorder.ondataavailable = e => {
chunks.push(e.data);
if (recorder.state == 'inactive') makeLink();
};
log('got media successfully');
}).catch(log);
}
start.onclick = e => {
start.disabled = true;
stop.removeAttribute('disabled');
chunks = [];
recorder.start();
}
stop.onclick = e => {
stop.disabled = true;
recorder.stop();
start.removeAttribute('disabled');
}
function makeLink() {
let blob = new Blob(chunks, {
type: media.type
}),
url = URL.createObjectURL(blob),
li = document.createElement('li'),
mt = document.createElement(media.tag),
hf = document.createElement('a');
mt.controls = true;
mt.src = url;
hf.href = url;
hf.download = `${counter++}${media.ext}`;
hf.innerHTML = `download ${hf.download}`;
li.appendChild(mt);
li.appendChild(hf);
ul.appendChild(li);
}
button {
margin: 10px 5px;
}
li {
margin: 10px;
}
body {
width: 90%;
max-width: 960px;
margin: 0px auto;
}
#btns {
display: none;
}
h1 {
margin-bottom: 100px;
}
<link type="text/css" rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
<h1> MediaRecorder API example</h1>
<p>For now it is supported only in Firefox(v25+) and Chrome(v47+)</p>
<div id='gUMArea'>
<div>
Record:
<input type="radio" name="media" value="video" checked id='mediaVideo'>Video
<input type="radio" name="media" value="audio">audio
</div>
<button class="btn btn-default" id='gUMbtn'>Request Stream</button>
</div>
<div id='btns'>
<button class="btn btn-default" id='start'>Start</button>
<button class="btn btn-default" id='stop'>Stop</button>
</div>
<div>
<ul class="list-unstyled" id='ul'></ul>
</div>
<script src="https://code.jquery.com/jquery-2.2.0.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
Please check out RecordRTC.
RecordRTC is MIT licensed on GitHub.
Yes, as you understood, MediaStreamRecorder is currently unimplemented.
MediaStreamRecorder is a WebRTC API for recording getUserMedia() streams. It allows web apps to create a file from a live audio/video session.
Alternatively you may do it like this: http://ericbidelman.tumblr.com/post/31486670538/creating-webm-video-from-getusermedia - but the audio part is missing.
You can use RecordRTC-together, which is based on RecordRTC.
It supports recording video and audio together in separate files. You will need a tool like ffmpeg to merge the two files into one on the server.
Web Call Server 4 can record WebRTC audio and video to a WebM container.
The recording is done using the Vorbis codec for audio and the VP8 codec for video.
The initial WebRTC codecs are Opus or G.711 and VP8, so server-side recording requires either Opus/G.711-to-Vorbis transcoding, or VP8-to-H.264 transcoding if it is necessary to use another container, e.g. AVI.
For the record, I also don't have enough knowledge about this, but I found this on GitHub:
<!DOCTYPE html>
<html>
<head>
<title>XSockets.WebRTC Client example</title>
<meta charset="utf-8" />
<style>
body {
}
.localvideo {
position: absolute;
right: 10px;
top: 10px;
}
.localvideo video {
max-width: 240px;
width:100%;
margin-right:auto;
margin-left:auto;
border: 2px solid #333;
}
.remotevideos {
height:120px;
background:#dadada;
padding:10px;
}
.remotevideos video{
max-height:120px;
float:left;
}
</style>
</head>
<body>
<h1>XSockets.WebRTC Client example </h1>
<div class="localvideo">
<video autoplay></video>
</div>
<h2>Remote videos</h2>
<div class="remotevideos">
</div>
<h2>Recordings ( Click on your camera stream to start record)</h2>
<ul></ul>
<h2>Trace</h2>
<div id="immediate"></div>
<script src="XSockets.latest.js"></script>
<script src="adapter.js"></script>
<script src="bobBinder.js"></script>
<script src="xsocketWebRTC.js"></script>
<script>
var $ = function (selector, el) {
if (!el) el = document;
return el.querySelector(selector);
}
var trace = function (what, obj) {
var pre = document.createElement("pre");
pre.textContent = JSON.stringify(what) + " - " + JSON.stringify(obj || "");
$("#immediate").appendChild(pre);
};
var main = (function () {
var broker;
var rtc;
trace("Ready");
trace("Try connect the connectionBroker");
var ws = new XSockets.WebSocket("wss://rtcplaygrouund.azurewebsites.net:443", ["connectionbroker"], {
ctx: '23fbc61c-541a-4c0d-b46e-1a1f6473720a'
});
var onError = function (err) {
trace("error", arguments);
};
var recordMediaStream = function (stream) {
if ("MediaRecorder" in window === false) {
trace("Recorder not started MediaRecorder not available in this browser. ");
return;
}
var recorder = new XSockets.MediaRecorder(stream);
recorder.start();
trace("Recorder started.. ");
recorder.oncompleted = function (blob, blobUrl) {
trace("Recorder completed.. ");
var li = document.createElement("li");
var download = document.createElement("a");
download.textContent = new Date();
download.setAttribute("download", XSockets.Utils.randomString(8) + ".webm");
download.setAttribute("href", blobUrl);
li.appendChild(download);
$("ul").appendChild(li);
};
};
var addRemoteVideo = function (peerId, mediaStream) {
var remoteVideo = document.createElement("video");
remoteVideo.setAttribute("autoplay", "autoplay");
remoteVideo.setAttribute("rel", peerId);
attachMediaStream(remoteVideo, mediaStream);
$(".remotevideos").appendChild(remoteVideo);
};
var onConnectionLost = function (remotePeer) {
trace("onconnectionlost", arguments);
var peerId = remotePeer.PeerId;
var videoToRemove = $("video[rel='" + peerId + "']");
$(".remotevideos").removeChild(videoToRemove);
};
var oncConnectionCreated = function () {
console.log(arguments, rtc);
trace("oncconnectioncreated", arguments);
};
var onGetUerMedia = function (stream) {
trace("Successfully got some userMedia , hopefully a goat will appear..");
rtc.connectToContext(); // connect to the current context?
};
var onRemoteStream = function (remotePeer) {
addRemoteVideo(remotePeer.PeerId, remotePeer.stream);
trace("Opps, we got a remote stream. lets see if its a goat..");
};
var onLocalStream = function (mediaStream) {
trace("Got a localStream", mediaStream.id);
attachMediaStream($(".localvideo video "), mediaStream);
// if user click, video , call the recorder
$(".localvideo video ").addEventListener("click", function () {
recordMediaStream(rtc.getLocalStreams()[0]);
});
};
var onContextCreated = function (ctx) {
trace("RTC object created, and a context is created - ", ctx);
rtc.getUserMedia(rtc.userMediaConstraints.hd(false), onGetUerMedia, onError);
};
var onOpen = function () {
trace("Connected to the brokerController - 'connectionBroker'");
rtc = new XSockets.WebRTC(this);
rtc.onlocalstream = onLocalStream;
rtc.oncontextcreated = onContextCreated;
rtc.onconnectioncreated = oncConnectionCreated;
rtc.onconnectionlost = onConnectionLost;
rtc.onremotestream = onRemoteStream;
rtc.onanswer = function (event) {
};
rtc.onoffer = function (event) {
};
};
var onConnected = function () {
trace("connection to the 'broker' server is established");
trace("Try get the broker controller form server..");
broker = ws.controller("connectionbroker");
broker.onopen = onOpen;
};
ws.onconnected = onConnected;
});
document.addEventListener("DOMContentLoaded", main);
</script>
On line number 89 (in my case), the recorder.oncompleted handler actually appends a link to the recorded file; if you click on that link it will start the download, and you can save that path to your server as a file.
The Recording code looks something like this
recorder.oncompleted = function (blob, blobUrl) {
trace("Recorder completed.. ");
var li = document.createElement("li");
var download = document.createElement("a");
download.textContent = new Date();
download.setAttribute("download", XSockets.Utils.randomString(8) + ".webm");
download.setAttribute("href", blobUrl);
li.appendChild(download);
$("ul").appendChild(li);
};
The blobUrl holds the path. I solved my problem with this; I hope someone will find it useful.
Currently, browsers support recording on the client side.
https://webrtc.github.io/samples/
One can push the recorded file to the server after the connection has ended by uploading it through an HTTP request.
https://webrtc.github.io/samples/src/content/getusermedia/record/
https://github.com/webrtc/samples/tree/gh-pages/src/content/getusermedia/record
This has a drawback: if the user just closes the tab without running these operations on the backend side, the files may not be fully uploaded to the server.
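For the upload step, a minimal sketch (the /upload endpoint is hypothetical), posting the Blob assembled from MediaRecorder's chunks:
const form = new FormData();
form.append("video", blob, "recording.webm"); // "blob" is the finished recording
fetch("/upload", { method: "POST", body: form })
  .then(res => console.log("upload finished:", res.status));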
As a more stable solution, Ant Media Server can record the stream on the server side; recording is one of its basic features.
antmedia.io
Note: I'm a member of Ant Media team.
Technically, you can use FFmpeg on the backend to mix the video and audio.
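For instance, a typical invocation (file names are placeholders) muxing a separate video and audio track into one file:
ffmpeg -i video.webm -i audio.ogg -c copy output.webm
The -c copy avoids re-encoding; drop it and pick codecs explicitly if they don't fit the output container.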
