OK, I'm going to try to make my question as clear as possible, but I'm pretty confused, so let me know if I'm not getting the message across.
I'm trying to use getUserMedia to access the webcam, and then use the MediaStream Recording API
http://www.w3.org/TR/mediastream-recording/
to record a brief video clip. The problem is that when I try to call new MediaRecorder(stream), I'm told that MediaRecorder is undefined. I haven't used this API before, so I don't really know what I'm missing. Here is the relevant code:
function onVideoFail(e) {
    console.log('webcam fail!', e);
};

function hasGetUserMedia() {
    return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
              navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

if (hasGetUserMedia()) {
    window.URL = window.URL || window.webkitURL;
    navigator.getUserMedia = navigator.getUserMedia ||
                             navigator.webkitGetUserMedia ||
                             navigator.mozGetUserMedia ||
                             navigator.msGetUserMedia;
    if (navigator.getUserMedia) {
        navigator.getUserMedia({video: true, audio: false}, function(stream) {
            var video = document.querySelector('video');
            var recorder = new MediaRecorder(stream); // <<<<<< THIS IS MY PROBLEM SPOT
            video.src = window.URL.createObjectURL(stream);
            video.play();
            // webcamstream = stream;
            // streamrecorder = webcamstream.record();
        }, onVideoFail);
    } else {
        alert('failed');
    }
} else {
    alert('getUserMedia() is not supported by this browser!!');
}
I've been trying to look at this for reference:
HTML5 getUserMedia record webcam, both audio and video
MediaStream Recording (or the Media Recorder API, after the MediaRecorder JS object it defines) has now been implemented in two major desktop browsers:
Firefox 30: audio + video
Chrome 47, 48: video only, with the experimental Web Platform features flag enabled in chrome://flags
Chrome 49: audio + video
Containers:
Both record to .webm containers.
Video codecs:
Both record VP8 video
Chrome 49+ can record VP9 video
Chrome 52+ can record H.264 video
Audio codecs:
Firefox records Vorbis audio @ 44.1 kHz
Chrome 49 records Opus audio @ 48 kHz
Demos/GitLab:
https://simpl.info/mediarecorder/
https://addpipe.com/media-recorder-api-demo/
Make sure you run these demos on HTTPS or localhost:
Starting with Chrome 47, getUserMedia() requests are only allowed from secure origins: HTTPS or localhost.
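Once support is there, a minimal recording sketch looks roughly like this (the video/webm mimeType and the five-second timeout are illustrative choices, not requirements of the API):

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(function(stream) {
        var chunks = [];
        var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
        recorder.ondataavailable = function(e) {
            if (e.data.size > 0) chunks.push(e.data);
        };
        recorder.onstop = function() {
            // assemble the recorded chunks into a single Blob
            var blob = new Blob(chunks, { type: 'video/webm' });
            var url = URL.createObjectURL(blob);
            // e.g. set url as the src of a <video> element, or upload the blob
        };
        recorder.start();
        setTimeout(function() { recorder.stop(); }, 5000); // record for 5 seconds
    })
    .catch(function(err) { console.error('getUserMedia failed:', err); });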
Further reading:
MediaRecorder spec on w3c
HTML5’s Media Recorder API in Action on Chrome and Firefox on addpipe.com
MediaRecorder API on mozilla.org
Chrome 47 WebRTC: media recording, secure origins on developers.google.com
MediaRecorder API Available in Chrome 49 on developers.google.com
Disclaimer: I work at Pipe where we handle video recording.
I am currently using this API, and I've found out that it is currently only implemented in the Nightly version of Firefox, and that it can only record audio.
It isn't implemented in Chrome (to my knowledge).
Here is how I use it, if it can help:
function record(length, stream) {
    var recorder = new window.MediaRecorder(stream);
    recorder.ondataavailable = function(event) {
        if (recorder.state == 'recording') {
            var blob = new window.Blob([event.data], {
                type: 'audio/ogg'
            });
            // use the created blob
            recorder.stop();
        }
    };
    recorder.onstop = function() {
        recorder = null;
    };
    recorder.start(length);
}
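A possible usage, assuming stream comes from a getUserMedia success callback and 5000 ms is just an example length:

record(5000, stream); // grabs roughly five seconds of audio into one blob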
I put a MediaStream Recording demo at simpl.info/mediarecorder.
This currently works with Firefox Nightly and, as @Apzx says, it's audio only. There is an Intent to Implement for Chrome.
As of version 49, Chrome now has support for the MediaRecorder API. You may now record MediaStream objects. Unfortunately, if you're building a product that must record MediaStreams for browsers older than literally the latest version of Chrome (at least as of this writing), then you will need to make use of a WebRTC Media Server/Gateway.
The basic idea is that a peer connection is instantiated on a server and that your local peer attaches a video and/or audio stream to its connection object to be sent to the server peer. The server then listens to the incoming WebRTC stream and records it to a file. A couple of media servers you may want to check out:
Kurento
Janus
I personally am using Kurento and recording streams with it with great success.
In order to get a media server to work, you will need to spin up your own app server that handles signaling and handling of ICE Candidates. This is pretty simple, and Kurento has some pretty good examples with Node and Java.
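As a rough sketch of the client side (the signalingSocket endpoint and message format below are hypothetical; Kurento and Janus each ship their own signaling protocol and client libraries):

const signalingSocket = new WebSocket('wss://example.com/signal'); // hypothetical signaling channel

async function startSendingToServer() {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    const pc = new RTCPeerConnection();

    // attach the local tracks so they are sent to the server-side peer
    stream.getTracks().forEach(track => pc.addTrack(track, stream));

    // forward ICE candidates to the server as they are gathered
    pc.onicecandidate = e => {
        if (e.candidate) signalingSocket.send(JSON.stringify({ candidate: e.candidate }));
    };

    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signalingSocket.send(JSON.stringify({ sdp: pc.localDescription }));
    // the server's answer and its ICE candidates are applied with
    // pc.setRemoteDescription(...) / pc.addIceCandidate(...) when they arrive
}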
If you are targeting a general audience and are using a media server, you will also probably need a STUN or TURN server. These servers essentially use the network topology to discover your media server's public IP and your client's public IP. Beware that, if either end (the media server or client) lies behind a symmetric NAT, a STUN server will not be enough and you will need to use a TURN server (a free one can be found here). Without going into too much detail, a good thing to remember is that a STUN server acts as a signaling gateway, whereas a TURN server is a relay gateway. In other words, with TURN the media streams literally pass through the TURN server, whereas with STUN the media streams pass directly from one RTC peer connection to the other.
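Wiring STUN/TURN into the client is just a matter of passing iceServers when constructing the peer connection. The Google STUN URL below is a commonly used public server, while the TURN entry is a placeholder for whatever you deploy:

const pc = new RTCPeerConnection({
    iceServers: [
        { urls: 'stun:stun.l.google.com:19302' },    // public STUN server
        { urls: 'turn:turn.example.com:3478',        // placeholder TURN server
          username: 'user', credential: 'secret' }
    ]
});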
Hopefully this was helpful. If you really need RTC recording capabilities, then you're going to be going down a long road, so make sure it's worth it.
See also RecordRTC, which has workarounds for Chrome to roughly emulate the capability of MediaStream Recording. Firefox documentation is here
Related
I'm currently spiking out a music application with HTML5/JS and am attempting to achieve the lowest latency I can with the MediaStream Recording API. The app allows a user to record music with a camera and microphone. While the camera and microphone are on, the code will allow the user to hear and see themselves.
At the moment I have:
const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: {
        latency: { exact: 0.003 },
    }
});

// monitor video and audio (i.e. show it to the user)
this.video.srcObject = stream;
this.video.play();
If I go any lower on the latency requirement, I get an OverconstrainedError. The latency is okay (better than the default) but still not great for the purpose of hearing yourself while you're recording. There is a slight, perceptible lag between when you strum a guitar and when you hear it in your headphones.
Are there other optimizations here I can make to achieve better results? I don't care about the quality of the video and audio as much, so maybe lowering resolution, sample rates, etc. could help here?
A latency of 0.003 is a very, very low latency (3 ms) and is not noticeable by the human ear.
That said, the latency cannot be 0 when we talk about digital audio.
Although you set a very low value, it is not guaranteed that the latency will actually be matched, for various reasons; if the system can't match the requested latency, the promise will be rejected.
As you can read here in docs:
Constraints which are specified using any or all of max, min, or exact
are always treated as mandatory. If any constraint which uses one or
more of those can't be met when calling applyConstraints(), the
promise will be rejected.
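If a hard rejection is not what you want, one option (just a sketch, not something the original code requires) is to express latency as an ideal rather than an exact constraint and to disable the processing stages that typically add delay; the constraint names below are standard, but how much they help depends on the device:

const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: {
        latency: { ideal: 0.003 },   // best effort; will not reject if unattainable
        echoCancellation: false,     // these processing stages can add delay
        noiseSuppression: false,
        autoGainControl: false
    }
});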
Note: different browsers and different OSes behave differently.
Chrome
Chrome, in some Canary builds, introduced a low-latency feature called Live Web Audio Input:
// success callback when requesting audio input stream
function gotStream(stream) {
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioContext = new AudioContext();

    // Create an AudioNode from the stream.
    var mediaStreamSource = audioContext.createMediaStreamSource(stream);

    // Connect it to the destination to hear yourself (or any other node for processing!)
    mediaStreamSource.connect(audioContext.destination);
}

navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
navigator.getUserMedia({audio: true}, gotStream);
Here you can see some demos taking advantage of that feature in action:
Live vocoder
Live input visualizer
Pitch detector
We've been working on a JavaScript-based audio chat client that runs in the browser and sends audio samples to a server via a WebSocket. We previously tried using the Web Audio API's ScriptProcessorNode to obtain the sample values. This worked well on our desktops and laptops, but we experienced poor audio quality when transmitting from a handheld platform we must support. We've attributed this to the documented script processor performance issues (https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API). On the handheld, with a script processor buffer size of 2048, audio consistently had breakups. At the next highest size interval (4096), the audio was smooth (no breakups), but there was too much latency (around two seconds).
Our results from ScriptProcessorNode prompted experimentation with Audio Worklet. Unfortunately, with our worklet implementation, audio quality is worse: both breakups and latency, even on our laptops. I'm wondering if there's a way to tweak our worklet implementation to get better performance, or if what we're experiencing is to be expected from (is "par for the course" for) the current state of audio worklets (Chromium issues 796330, 813825, and 836306 seem relevant).
Here's a little more detail on what the code does:
Create a MediaStreamAudioSourceNode with the MediaStream obtained from getUserMedia.
Connect the source node to our worklet node implementation (extends AudioWorkletNode).
Our worklet processor implementation (extends AudioWorkletProcessor) buffers blocks that arrive as the "input" argument to its process method.
When buffer is full, use MessagePort to send the buffer contents to the worklet node.
Worklet node transmits the buffer contents over a WebSocket connection.
The process method is below. The var "samples" is a Float32Array, which gets initialized to the buffer size and reused. I've experimented with buffer size a bit, but it doesn't seem to have an impact. The approach is based on the guidance in section 4.1 of AudioWorklet: The future of web audio to minimize memory allocations.
if (micKeyed == true) {
    if (inputs[0][0].length == framesPerBlock) {
        samples.set(inputs[0][0], currentBlockIndex * framesPerBlock);
        currentBlockIndex++;
        if (currentBlockIndex == lastBlockIndex) {
            // console.log('About to send buffer.');
            this.port.postMessage(samples);
            currentBlockIndex = 0;
        }
    } else {
        console.error("Got a block of unexpected length!!!");
    }
}
return true;
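For reference, the main-thread side of this kind of setup (steps 1, 2, and 5 above) might look roughly like the sketch below. The module path, processor name, and WebSocket URL are placeholders, and a plain AudioWorkletNode is used instead of the subclass described in the question:

const audioContext = new AudioContext();
const socket = new WebSocket('wss://example.com/audio');   // placeholder endpoint
socket.binaryType = 'arraybuffer';

async function startCapture() {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

    // the worklet processor file registers itself under a name such as 'capture-processor'
    await audioContext.audioWorklet.addModule('capture-processor.js');

    const source = audioContext.createMediaStreamSource(stream);
    const workletNode = new AudioWorkletNode(audioContext, 'capture-processor');

    // buffers posted from the processor's process() method arrive here
    workletNode.port.onmessage = (event) => {
        if (socket.readyState === WebSocket.OPEN) {
            socket.send(event.data.buffer);   // event.data is the Float32Array of samples
        }
    };

    source.connect(workletNode);
    // keep the node pulled by the rendering graph; its output stays silent
    // because process() never writes to outputs
    workletNode.connect(audioContext.destination);
}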
Currently testing with PCs running Chrome 72.0.3626.109 on CentOS 7. Our handhelds are Panasonic FZ-N1 running Chrome 72.0.3626.105 on Android 6.0.1.
Thank you for reading and any suggestions you may be able to provide.
I want to send audio signal coming from my audio interface (focusrite saffire) to my nodejs server. How should I go about doing this? The easiest way would be to access the audio interface from the browser (html5) like capturing microphone output with getUserMedia (https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia), but couldn't find a way to access my audio interface through that library. Otherwise, I'm planning on creating a desktop application, but don't know if there is a function/library to allow access to my usb-connected audio interface.
This probably has little to do with JavaScript or the MediaDevices API. On Linux, using Firefox, PulseAudio is required to interface with your audio hardware. Since your soundcard is an input/output interface, you should be able to test it pretty easily by simply playing any sound file in the browser.
Most of PulseAudio configuration can be achieved using pavucontrol GUI. You should check the "configuration" tab and both "input device" & "output device" tabs to make sure your Focusrite is correctly set up and used as sound I/O.
Once this is done, you should be able to access an audio stream using the following (only available in a "secure context", i.e. localhost or served through HTTPS, as stated in the MDN page you mentioned):
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function(stream) {
        // do whatever you want with this audio stream
    })
(code snippet taken from the MDN page about MediaDevices)
Sending audio to the server is a whole other story. Have you tried anything? Are you looking for real-time communication with the server? If so, I'd start by having a look at the WebSockets API. The WebRTC docs might be worth reading too, but they are more oriented to client-to-client communication.
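To give an idea of the WebSocket route, a rough sketch (the endpoint URL and the one-second timeslice are arbitrary illustration values) could capture the interface's input with getUserMedia, encode it with MediaRecorder, and push each chunk to the server:

const socket = new WebSocket('wss://example.com/audio-upload');   // placeholder server endpoint

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const recorder = new MediaRecorder(stream);
    recorder.ondataavailable = (e) => {
        // each chunk is a Blob of encoded audio; send it as it becomes available
        if (e.data.size > 0 && socket.readyState === WebSocket.OPEN) {
            socket.send(e.data);
        }
    };
    recorder.start(1000);   // emit a chunk roughly every second
});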
How to use exact device id
Use a media device constraint to pass the exact device id.
A sample would be:
const preferedDeviceName = '<your_device_name>';
const deviceList = await navigator.mediaDevices.enumerateDevices();
const audio = deviceList.find((device) => device.kind === 'audioinput' && device.label === preferedDeviceName);
const { deviceId } = audio;
navigator.mediaDevices.getUserMedia({ audio: { deviceId: { exact: deviceId } } });
How to process and send to the backend
This is possibly a duplicate of this question.
If it's still not clear, follow this.
I am trying to use media streams with getUserMedia on Chrome on Android. To test, I worked up the script below, which simply connects the input stream to the output. This code works as expected on Chrome under Windows, but on Android I do not hear anything. The user is prompted to allow microphone access, but no audio comes out of the speaker, handset speaker, or headphone jack.
navigator.webkitGetUserMedia({
    video: false,
    audio: true
}, function (stream) {
    var audioContext = new webkitAudioContext();
    var input = audioContext.createMediaStreamSource(stream);
    input.connect(audioContext.destination);
});
In addition, the feedback beeps when rolling the volume up and down do not sound, as if Chrome is playing back audio to the system.
Is it true that this functionality isn't supported on Chrome for Android yet? The following questions are along similar lines, but neither really have a definitive answer or explanation.
HTML5 audio recording not working in Google Nexus
detecting support for getUserMedia on Android browser fails
As I am new to using getUserMedia, I wanted to make sure there wasn't something I was doing in my code that could break compatibility.
I should also note that this problem doesn't seem to apply to getUserMedia itself. It is possible to use getUserMedia in an <audio> tag, as demonstrated by this code (utilizes jQuery):
navigator.webkitGetUserMedia({
    video: false,
    audio: true
}, function (stream) {
    $('body').append(
        $('<audio>').attr('autoplay', 'true').attr('src', webkitURL.createObjectURL(stream))
    );
});
Chrome on Android now properly supports getUserMedia. I suspect that this originally had something to do with the difference in sample rate between recording and playback (which exhibits the same issue on desktop Chrome). In any case, everything started working in the stable channel sometime around February 2014.
Neither Safari nor Firefox is able to process audio data from a MediaElementSource using the Web Audio API.
var audioContext, audioScript, audioSource,
    result = document.createElement('h3'),
    output = document.createElement('span'),
    mp3 = '//www.jonathancoulton.com/wp-content/uploads/encodes/Smoking_Monkey/mp3/09_First_of_May_mp3_3a69021.mp3',
    ogg = '//upload.wikimedia.org/wikipedia/en/4/45/ACDC_-_Back_In_Black-sample.ogg',
    gotData = false, data, audio = new Audio();

function connect() {
    audioContext = window.AudioContext ? new AudioContext() : new webkitAudioContext();
    audioSource = audioContext.createMediaElementSource(audio);
    audioScript = audioContext.createScriptProcessor(2048);

    audioSource.connect(audioScript);
    audioSource.connect(audioContext.destination);
    audioScript.connect(audioContext.destination);

    audioScript.addEventListener('audioprocess', function(e) {
        if ((data = e.inputBuffer.getChannelData(0)[0] * 3)) {
            output.innerHTML = Math.abs(data).toFixed(3);
            if (!gotData) gotData = true;
        }
    }, false);
}

(function setup() {
    audio.volume = 1 / 3;
    audio.controls = true;
    audio.autoplay = true;
    audio.src = audio.canPlayType('audio/mpeg') ? mp3 : ogg;
    audio.addEventListener('canplay', connect);

    result.innerHTML = 'Channel Data: ';
    output.innerHTML = '0.000';

    document.body.appendChild(result).appendChild(output);
    document.body.appendChild(audio);
})();
Are there any plans to patch this in the near future? Or is there some work-around that would still provide the audio controls to the user?
For Apple, is this something that could be fixed in the WebKit Nightlies, or will we have to wait until the Safari 8.0 release to get HTML5 <audio> playing nicely with the Web Audio API? The Web Audio API has existed in Safari since at least version 6.0, and I initially posted this question long before Safari 7.0 was released. Is there a reason this wasn't fixed already? Will it ever be fixed?
For Mozilla, I know you're still in the process of switching over from the old Audio Data API, but is this a known issue with your Web Audio implementation and is it going to be fixed before the next release of Firefox?
This answer is quoted almost exactly from my answer to a related question: Firefox 25 and AudioContext createJavaScriptNote not a function
Firefox does support MediaElementSource if the media adheres to the Same-Origin Policy; however, Firefox produces no error when attempting to use media from a remote origin.
The specification is not really specific about it (pun intended), but I've been told that this is an intended behavior, and the issue is actually with Chrome… It's the Blink implementations (Chrome, Opera) that need to be updated to require CORS.
MediaElementSource Node and Cross-Origin Media Resources:
From: Robert O'Callahan <robert@ocallahan.org>
Date: Tue, 23 Jul 2013 16:30:00 +1200
To: "public-audio#w3.org" <public-audio#w3.org>
HTML media elements can play media resources from any origin. When an
element plays a media resource from an origin different from the page's
origin, we must prevent page script from being able to read the contents of
the media (e.g. extract video frames or audio samples). In particular we
should prevent ScriptProcessorNodes from getting access to the media's
audio samples. We should also prevent information about samples leaking in other
ways (e.g. timing channel attacks). Currently the Web Audio spec says
nothing about this.
I think we should solve this by preventing any non-same-origin data from
entering Web Audio. That will minimize the attack surface and the impact on
Web Audio.
My proposal is to make MediaElementAudioSourceNode convert data coming from
a non-same origin stream to silence.
If this proposal makes it into spec it will be nearly impossible for a developer to even realize why his MediaElementSource is not working. As it stands right now, calling createMediaElementSource() on an <audio> element in Firefox 26 actually stops the <audio> controls from working at all and throws no errors.
What dangerous things can you do with the audio/video data from a remote origin? The general idea is that without applying the Same-Origin Policy to a MediaElementSource node, some malicious javascript could access media that only the user should have access to (session, vpn, local server, network drives) and send its contents—or some representation of it—to an attacker.
The HTML5 media elements don't have these restrictions by default. You can include remote media across all browsers by using the <audio>, <img>, or <video> elements. It's only when you want to manipulate or extract the data from these remote resources that the Same-Origin Policy comes into play.
[It's] for the same reason that you cannot dump image data cross-origin via <canvas>: media may contain sensitive information and therefore allowing rogue sites to dump and re-route content is a security issue. - @nmaier
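For completeness, the usual way to keep a MediaElementSource readable across origins (a sketch, assuming you control the remote server and it sends Access-Control-Allow-Origin for your page's origin; the URL below is a placeholder) is to request the media with the crossorigin attribute, so the response is CORS-approved rather than opaque:

var audio = new Audio();
audio.crossOrigin = 'anonymous';   // request the file with CORS; only works if the server allows it
audio.src = 'https://media.example.com/clip.mp3';   // placeholder URL for a CORS-enabled resource

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var source = ctx.createMediaElementSource(audio);
source.connect(ctx.destination);   // sample data stays readable because the response is CORS-approved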
createMediaElementSource() does not work properly in Safari 8.0.5 (and possibly earlier) but is fixed in WebKit Nightly as of 10600.5.17, r183978.