I am having issues using the Web Audio API with JavaScript.
The problem is that I hear glitches in the sounds played in my browser, even though I have used a GainNode to gradually ramp the level up and down when the sound starts and stops.
The audio file is simply 60 seconds of a 400 Hz tone, to demonstrate the issue. In the demo I play a snippet from time point 2.0 seconds for a duration of 1 second; within this duration I ramp up for 100 ms, and at 800 ms I begin to ramp down over 199 ms. This is an attempt to avoid a glitch from a non-zero crossing. I use gainNode.gain.setTargetAtTime(), but I have tried exponentialRampToValueAtTime() as well. In this example I repeat the snippet at time point 52 seconds.
At the beginning of the code I implement an audioContext.resume() to trigger the audio facility of the browser.
<!DOCTYPE html>
<html>
  <head>
    <title>My experiment</title>
    <audio id="audio" src="pure_400Hz_tone.ogg" preload="auto"></audio>
  </head>
  <body>
    <div id="jspsych_target"></div>
    <button onclick="dummyPress()">Press to Activate Audio</button>
    <button onclick="playTheTones()">sound the tone</button>
    <script>
      console.log("setting up audiocontext at ver 28");
      const audioContext = new AudioContext();
      const element = document.querySelector("audio");
      const source = audioContext.createMediaElementSource(element);
      const gainNode = audioContext.createGain();
      gainNode.gain.setValueAtTime(0, audioContext.currentTime);
      source.connect(gainNode);
      gainNode.connect(audioContext.destination);

      function dummyPress() {
        audioContext.resume();
        playTheTones();
      }

      function playTheTones() {
        // ******* The First Tone ***********
        element.currentTime = 2; // seek on the media element (the source node itself has no currentTime)
        gainNode.gain.setTargetAtTime(1.0, audioContext.currentTime, 0.1);
        setTimeout(function () {
          gainNode.gain.setTargetAtTime(0.0001, audioContext.currentTime, 0.199);
          console.log("start Down # " + element.currentTime);
        }, 800);
        element.play();
        console.log("PLAYING 2 now # " + element.currentTime);
        setTimeout(function () {
          element.pause();
          console.log("STOPPED # " + element.currentTime);
        }, 1100);

        // ******* The Second Tone **********
        setTimeout(function () {
          element.currentTime = 52;
          gainNode.gain.setTargetAtTime(1.0, audioContext.currentTime, 0.1);
          setTimeout(function () {
            gainNode.gain.setTargetAtTime(0.0001, audioContext.currentTime, 0.199);
            console.log("start Down # " + element.currentTime);
          }, 800);
          element.play();
          console.log("PLAYING 52 now # " + element.currentTime);
          setTimeout(function () {
            element.pause();
            console.log("STOPPED # " + element.currentTime);
          }, 1100);
        }, 1500);
      }
    </script>
  </body>
</html>
Unfortunately I think I have confused myself while trying to resolve the glitch issues; I may not be following best practice with the API, and that might be causing my problem.
Would someone look at the code, point out whether I am using the API correctly, and confirm that I should be able to present tones this way without glitching?
Thanks
I found the problem:
gainNode.gain.setTargetAtTime(0.0001, audioContext.currentTime, 0.199);
The third parameter is a 'time constant', not a 'time duration', so 0.199 was enormous and the gain did not diminish rapidly enough, causing the glitch. Setting it to 0.01 cures the issue!
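For anyone else hitting this, here is a sketch of the corrected fade-out. Scheduling both ramps on the audio clock (rather than via setTimeout, as in my code above) is an alternative worth considering, since AudioParam events fire with sample accuracy:

// setTargetAtTime's third argument is a time constant (tau), not a duration:
// the gain only closes ~63% of the remaining gap per tau, ~95% after 3*tau.
// With tau = 0.199 s the fade was nowhere near finished when the element paused
// 300 ms later, hence the click; with tau = 0.01 s it is effectively done in 30-50 ms.
const t = audioContext.currentTime;
gainNode.gain.setTargetAtTime(1.0, t, 0.1);            // ramp up at the start
gainNode.gain.setTargetAtTime(0.0001, t + 0.8, 0.01);  // start fading 800 ms in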
Related
I had a few sound effects that would play as part of my programs, using the simple audio.play() method in JavaScript.
The problem? Sometimes the sound effects would be badly delayed relative to the user action -- like 1-3 seconds of delay.
The file size of each audio file is very small (5 KB - 50 KB), so I don't think that's the issue.
I tried preloading the audio files; that didn't work.
Weirdly enough, once a sound effect had played, it would play just fine on every repeat after that, instantly responsive. But when users would not trigger a sound-producing action for a long time -- or when they would switch tabs, do other stuff, then switch back -- the sound effect would be delayed 1-3 seconds before playing.
Here's the pertinent code:
function AudioSettings() {
    SoundEffectCashRegister = new Audio("cash-register-cha-ching-sound-effect.mp3");
    SoundEffectSadTrombone = new Audio("sad-trombone-sound-effect.mp3");
    SoundEffectBubblePop = new Audio("quick-bubble-pop-sound-effect.mp3");
    SoundEffectCashRegister.preload = 'auto';
    SoundEffectSadTrombone.preload = 'auto';
    SoundEffectBubblePop.preload = 'auto';

    VolumeSetting = localStorage.getItem("UserSavedVolume");
    document.getElementById("volume").value = (VolumeSetting * 100); // sets volume to user-selected value, on popup load

    SoundEffectCashRegister.volume = VolumeSetting;
    SoundEffectSadTrombone.volume = VolumeSetting / 1.5;
    SoundEffectBubblePop.volume = VolumeSetting / 1.5;

    // updates volume in real time, on slider adjustment -- then saves user preference to storage
    volume.addEventListener("input", (e) => {
        SoundEffectCashRegister.volume = e.currentTarget.value / 100;
        SoundEffectSadTrombone.volume = (e.currentTarget.value / 1.5) / 100;
        SoundEffectBubblePop.volume = (e.currentTarget.value / 1.5) / 100;
        localStorage.setItem("UserSavedVolume", SoundEffectCashRegister.volume);
    });
}
Then each time an audio file needed to be played, I would just call the SoundEffectCashRegister.play() method, etc.
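For reference, a guarded version of such a call might look like the sketch below -- play() returns a promise in modern browsers, so a rejection (from autoplay blocking, for instance) can at least be logged; the rewind is optional but lets rapid retriggers restart the clip:

// minimal sketch, reusing the element created in AudioSettings() above
function playCashRegister() {
    SoundEffectCashRegister.currentTime = 0; // rewind so rapid retriggers restart the clip
    SoundEffectCashRegister.play().catch(function (err) {
        console.log("playback was blocked or interrupted: " + err);
    });
}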
Thanks
I have this little JavaScript that I have working; it plays a tone for 30 seconds. I wanted to see if it was possible to modify it so that it plays at, say, half volume instead (it's just too loud). I have researched and found information about the gain node, but honestly I'm a noob with JavaScript and I don't fully understand the examples I saw.
Here is what I have so far (it works well, I just want to turn the volume down some):
function myFunction(frequency, duration, callback) {
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var oscillator = audioCtx.createOscillator();
    duration = 30000 / 1000; // 30 s; this hard-coded value used to be the 'duration' parameter
    oscillator.type = 'square';
    oscillator.frequency.value = 500; // value in hertz
    oscillator.connect(audioCtx.destination);
    oscillator.onended = callback;
    oscillator.start(0);
    oscillator.stop(audioCtx.currentTime + duration);
}
Can anyone help me modify this so I have a volume parameter I can tweak until it is right?
Documentation
The documentation for what you are trying to do can be found under BaseAudioContext, specifically the BaseAudioContext.createGain() method.
The MDN documentation is lacking a little in that it only provides snippets that will not work as is. As such, an overly simplified example is given below, bearing in mind that this may not be best practice.
Explanation
The oscillator and gain control are both audio nodes, so you should picture them in a signal chain like this:
oscillator → gain → destination
The oscillator node passes through the gain node, which passes through to the audio context's destination.
Solution
Using your current format, a self-contained snippet is given below.
<!DOCTYPE html>
<html>
  <head>
    <script>
      var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
      var gainNode = audioCtx.createGain();
      gainNode.connect(audioCtx.destination);

      var gain = 0.1;
      gainNode.gain.setValueAtTime(gain, audioCtx.currentTime);
    </script>
  </head>
  <body>
    <div>
      <button onclick="myFunction()">Click me</button>
    </div>
    <script>
      function myFunction() {
        var duration = 0.5; // in seconds

        var oscillator = audioCtx.createOscillator();
        oscillator.type = 'square';
        oscillator.frequency.value = 500;
        oscillator.connect(gainNode);
        oscillator.start(audioCtx.currentTime);
        oscillator.stop(audioCtx.currentTime + duration);
      }
    </script>
  </body>
</html>
Best practice
For best practice, I would suggest you avoid recreating nodes over and over again. Rather, I would simply turn the gain up for a short period, then turn it back down, an example of which is given below.
As far as I can tell, there is no envelope-generator node in Web Audio, but you could use .linearRampToValueAtTime() if needed (a sketch of that follows the example).
NOTE: This is not currently working in Safari.
<!DOCTYPE html>
<html>
  <head>
    <script>
      // Edit these
      var gain = 0.1;
      var duration = 1000; // ms

      var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
      var gainNode = audioCtx.createGain();

      var oscillator = audioCtx.createOscillator();
      oscillator.type = 'square';
      oscillator.frequency.value = 500;

      oscillator.connect(gainNode);
      gainNode.connect(audioCtx.destination);

      gainNode.gain.setValueAtTime(0.0, audioCtx.currentTime); // turned off by default
      oscillator.start(audioCtx.currentTime);
    </script>
  </head>
  <body>
    <div>
      <button onclick="soundOn()">Click me</button>
    </div>
    <script>
      function mute() {
        gainNode.gain.setValueAtTime(0.0, audioCtx.currentTime);
      }

      function soundOn() {
        gainNode.gain.setValueAtTime(gain, audioCtx.currentTime);
        setTimeout(mute, duration);
      }
    </script>
  </body>
</html>
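If the hard gain steps ever produce audible clicks, here is a sketch of an attack/release envelope built with .linearRampToValueAtTime(), reusing the gain, duration, gainNode and audioCtx variables from the example above (the 10 ms edge times are an arbitrary choice):

function soundOnSmooth() {
    var t = audioCtx.currentTime;
    var d = duration / 1000;                                // ms -> s
    gainNode.gain.cancelScheduledValues(t);
    gainNode.gain.setValueAtTime(0.0, t);
    gainNode.gain.linearRampToValueAtTime(gain, t + 0.01);  // 10 ms attack
    gainNode.gain.setValueAtTime(gain, t + d - 0.01);       // hold at full level
    gainNode.gain.linearRampToValueAtTime(0.0, t + d);      // 10 ms release
}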
Further Resources
If you are struggling with interacting with the audio context in this way, I would suggest trying out a library such as p5.js and its p5.sound library.
Look at https://p5js.org/reference/#/p5.Oscillator to see whether it is a more intuitive approach for you.
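For comparison, a hedged sketch of the same 500 Hz square-wave beep in p5.js + p5.sound (this assumes both libraries are loaded on the page; p5 calls setup() and mousePressed() automatically):

let osc;

function setup() {
    createCanvas(100, 100);
    osc = new p5.Oscillator(500, 'square');
    osc.amp(0);                  // start silent
    osc.start();
}

function mousePressed() {
    getAudioContext().resume();  // browsers require a user gesture before audio can play
    osc.amp(0.1, 0.05);          // ramp to 10% volume over 50 ms
    setTimeout(function () {
        osc.amp(0, 0.05);        // ramp back down to avoid a click
    }, 500);
}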
I am working on a computer vision project using OpenCV.js and Spring Boot, with Thymeleaf as my HTML5 template engine. I have been following the OpenCV.js tutorial here. I am supposed to get two outputs: one displaying the video input, and a canvas displaying the output where the face tracking takes place.
However, while the video displays and works as expected, the canvas does not show. When I inspect my code in the Chrome browser, I see an uncaught reference error saying cv is not defined.
Can somebody tell me whether there is anything I am doing wrong in my code?
Below is my code
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org" xmlns:layout="http://www.ultraq.net.nz/web/thymeleaf/layout" layout:decorate="layout">
  <head>
    <script async type="text/javascript" th:src="#{/opencv/opencv.js}"></script>
    <title>Self-Service Portal</title>
  </head>
  <body>
    <h2>Trying OpenCV Javascript Computer Vision</h2>
    <p id="status">Loading with OpenCV.js...</p>
    <video id="video" autoplay="true" width="300" height="225"></video> <br/>
    <canvas id="canvasOutput" width="300" height="225"></canvas>
    <script type="text/javascript">
      navigator.mediaDevices.getUserMedia({ video: true, audio: false })
        .then(function (stream) {
          video.srcObject = stream;
          video.play();
        })
        .catch(function (err) {
          console.log("An error occurred while accessing media! " + err);
        });

      let video = document.getElementById('video');
      let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
      let dst = new cv.Mat(video.height, video.width, cv.CV_8UC4);
      let gray = new cv.Mat();
      let cap = new cv.VideoCapture(video);
      let faces = new cv.RectVector();
      let classifier = new cv.CascadeClassifier();

      // load pre-trained classifiers
      classifier.load('haarcascade_frontalface_default.xml');

      const FPS = 30;
      function processVideo() {
        try {
          if (!streaming) {
            // clean and stop.
            src.delete();
            dst.delete();
            gray.delete();
            faces.delete();
            classifier.delete();
            return;
          }
          let begin = Date.now();
          // start processing.
          cap.read(src);
          src.copyTo(dst);
          cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0);
          // detect faces.
          classifier.detectMultiScale(gray, faces, 1.1, 3, 0);
          // draw faces.
          for (let i = 0; i < faces.size(); ++i) {
            let face = faces.get(i);
            let point1 = new cv.Point(face.x, face.y);
            let point2 = new cv.Point(face.x + face.width, face.y + face.height);
            cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
          }
          cv.imshow('canvasOutput', dst);
          // schedule the next one.
          let delay = 1000 / FPS - (Date.now() - begin);
          setTimeout(processVideo, delay);
        } catch (err) {
          utils.printError(err);
        }
      }

      // schedule the first one.
      setTimeout(processVideo, 0);
    </script>
  </body>
</html>
The cv is not defined error can be fixed by explicitly binding cv to the window's cv object, like: let cv = window.cv
Turning off async would not be ideal, because the OpenCV.js library is large and loading it synchronously would hurt the time to initial load. Instead, maybe assign a state variable that changes when the library finishes loading, run a check on this variable, and update the UI accordingly.
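A sketch of that state-variable idea is below. It assumes the opencv.js build exposes the Emscripten onRuntimeInitialized hook (the builds used in the OpenCV.js tutorials do), and startProcessing is a hypothetical entry point for the face-tracking code:

<script async type="text/javascript" th:src="#{/opencv/opencv.js}" onload="onOpenCvLoaded()"></script>
<script type="text/javascript">
  var cvReady = false;               // state variable: flips when the library is usable
  function onOpenCvLoaded() {
    // the file has loaded, but the WASM runtime may still be initializing
    cv['onRuntimeInitialized'] = function () {
      cvReady = true;
      document.getElementById('status').textContent = 'OpenCV.js is ready.';
      startProcessing();             // hypothetical: create the Mats and start processVideo here
    };
  }
</script>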
The code you find in the examples on the OpenCV page usually focuses on the functionality of the library, and sometimes it does not include all the functions needed to work. The examples on the OpenCV page usually rely on a utils.js file that contains those additional helper functions, and this is not at all obvious.
Most likely your problem will be solved by the answer in this forum.
Additionally, I have created a codesandbox that contains all the functions necessary for the video examples to work. Normally they will work for you if you replace the code under the comment
/** Your example code here **/
with your facial-recognition code.
The async on
<script async type="text/javascript" th:src="#{/opencv/opencv.js}"></script>
...means it won't hold up parsing and processing of the following markup while waiting for the file to load. So your inline script later in the file can run before that file is loaded and its JavaScript is run.
Remove async and move that script tag to the bottom of the document, just before your inline script that uses what it defines.
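Roughly, the restructured page would look like this (a sketch of the ordering only, not a full listing):

<body>
  <!-- ... video and canvas markup ... -->

  <!-- loaded synchronously, after the markup, before the code that uses it -->
  <script type="text/javascript" th:src="#{/opencv/opencv.js}"></script>

  <script type="text/javascript">
    // the getUserMedia / cv.Mat code from the question runs here,
    // after cv has been defined by the script above
  </script>
</body>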
This might be a shot in the dark but I have no idea what's causing this.
I've developed a game engine with WebGL. My main testing browser has been Firefox, and everything works perfectly there: no lag or random stutters, even when doing more intense things like blending with multiple framebuffers.
On Chrome, however, it's a whole other story. Chrome struggles to keep a stable fps even when running the most simple tasks. I created an experiment to see whether the problem was in my code or in the requestAnimationFrame loop. This is the code I ran:
<!DOCTYPE html>
<html>
  <body>
    <div id="fpsCounter"></div>
    Lowest fps
    <div id="minFps"></div>
    <br>
    Highest fps
    <div id="maxFps"></div>
    <script>
      var minFps = 999;
      var maxFps = 0;
      var fps = 0;
      var last = performance.now();
      var now;
      var fpsUpdateTime = 20;
      var fpsUpdate = 0;
      var fpsCounter = document.getElementById("fpsCounter");
      var minFpsEle = document.getElementById("minFps");
      var maxFpsEle = document.getElementById("maxFps");

      function timestamp() {
        return window.performance && window.performance.now ? window.performance.now() : new Date().getTime();
      }

      var getMaxFps = false;
      setTimeout(function () {
        getMaxFps = true;
      }, 2000);

      function gameLoop() {
        now = performance.now();
        if (fpsUpdate == 0) {
          fps = 1000 / (now - last);
          fpsUpdate = fpsUpdateTime;
        }
        fpsUpdate--;
        fpsCounter.innerHTML = fps;
        if (parseInt(fps, 10) < parseInt(minFps, 10)) {
          minFps = parseInt(fps, 10);
          minFpsEle.innerHTML = minFps;
        }
        if (parseInt(fps, 10) > parseInt(maxFps, 10) && getMaxFps) {
          maxFps = parseInt(fps, 10);
          maxFpsEle.innerHTML = maxFps;
        }
        last = now;
        requestAnimationFrame(gameLoop);
      }
      gameLoop();
    </script>
  </body>
</html>
All the code does is loop via requestAnimationFrame and put the fps into a div. On Firefox this works just as well as the whole game engine did: it keeps an average of about 58 fps and never dips below 52. Chrome struggles to stay above 40 fps and frequently dips below 28. Oddly enough, Chrome has frequent bursts of speed; the highest fps Chrome reached was 99, but that's kind of pointless since a stable 60 fps is more important.
Details:
Firefox version: 55.0.2 (64-bit)
Chrome version: 60.0.3112.78 (official build, 64-bit)
OS: Ubuntu 16.04 LTS
RAM: 8 GB
GPU: GTX 960M
CPU: Intel Core i7 (HQ)
This is how performance looks in Chrome: [dev tools screenshot]
I made this minimalistic HTML page as a test:
<!DOCTYPE html>
<html>
  <head>
    <title>requestAnimationFrame</title>
  </head>
  <body>
    <canvas id="canvas" width="300" height="300"></canvas>
    <script>
      "use strict"
      // (tested in Ubuntu 18.04 and Chrome 79.0)
      //
      // requestAnimationFrame is not precise
      // and often SKIPs a frame
      //
      function loop() {
        requestAnimationFrame(loop)
        var ctx = document.getElementById("canvas").getContext("2d")
        ctx.fillStyle = "red"
        ctx.fillRect(100, 100, 200, 100)
      }
      loop()
    </script>
  </body>
</html>
Summary [dev tools screenshot]:
There is no memory-leak problem.
The scripting execution time is negligible.
FPS [dev tools screenshot]:
The FPS behaves inconsistently (running Chrome on Ubuntu).
In this test the problem was hardware acceleration: the FPS was fine once hardware acceleration was disabled.
EDITED
I have done more tests with a page containing just a single canvas.
My conclusion is that browsers are very complex (or buggy) and hardly ever run smoothly 100% of the time.
My architecture for games:
var previousTimeStamp = 0

function mainLoop(timeStamp) {
    if (! shallSkipLoop(timeStamp)) { gameLoop() }
    requestAnimationFrame(mainLoop)
}

function gameLoop() {
    // some code here
}

function shallSkipLoop(timeStamp) {
    var deltaTime = timeStamp - previousTimeStamp
    previousTimeStamp = timeStamp
    //
    // avoid running a frame less than 1000 / 60 ms after the previous one:
    // this happens when the browser executes a frame too late
    // and tries to be on time for the next screen refresh,
    // but it may then start a long sequence of unsynced frames,
    // one very short (like 5 ms), the next very long (like 120 ms);
    // maybe it is a bug in the browser
    //
    return deltaTime < 16
}

requestAnimationFrame(mainLoop)
I'm building a cross-platform web app where audio is generated on-the-fly on the server and live streamed to a browser client, probably via the HTML5 audio element. On the browser, I'll have Javascript-driven animations that must precisely sync with the played audio. "Precise" means that the audio and animation must be within a second of each other, and hopefully within 250ms (think lip-syncing). For various reasons, I can't do the audio and animation on the server and live-stream the resulting video.
Ideally, there would be little or no latency between the audio generation on the server and the audio playback on the browser, but my understanding is that latency will be difficult to control and probably in the 3-7 second range (browser-, environment-, network- and phase-of-the-moon-dependent). I can handle that, though, if I can precisely measure the actual latency on-the-fly so that my browser Javascript knows when to present the proper animated frame.
So, I need to precisely measure the latency between my handing audio to the streaming server (Icecast?) and that audio coming out of the speakers of the computer hosting the browser. Some blue-sky possibilities:
Add metadata to the audio stream, and parse it from the playing audio (I understand this isn't possible using the standard audio element)
Add brief periods of pure silence to the audio, and then detect them on the browser (can audio elements yield the actual audio samples?)
Query the server and the browser as to the various buffer depths
Decode the streamed audio in Javascript and then grab the metadata
Any thoughts as to how I could do this?
Utilize the timeupdate event of the <audio> element, which is fired three to four times per second, to perform precise animations during streaming of media by checking the .currentTime of the <audio> element. Animations or transitions can be started or stopped up to several times per second this way.
If available in the browser, you can use fetch() to request the audio resource, and at .then() return response.body.getReader(), which returns a ReadableStream of the resource; then create a new MediaSource object and set the .src of an <audio> or new Audio() to an objectURL of the MediaSource; append the first stream chunk, at .read() chained with .then(), to the sourceBuffer of the MediaSource with .mode set to "sequence"; append the remaining chunks to the sourceBuffer at the sourceBuffer's updateend events.
If fetch() and response.body.getReader() are not available in the browser, you can still use the timeupdate or progress event of the <audio> element to check .currentTime, and start or stop animations or transitions at the required second of the streaming media playback.
Use the canplay event of the <audio> element to play the media once the stream has accumulated adequate buffers at the MediaSource to proceed with playback.
You can use a plain object whose properties are numbers corresponding to the .currentTime of the <audio> at which an animation should occur, and whose values are the CSS property of the element to animate, to perform precise animations.
In the JavaScript below, animations occur at every twenty-second period, beginning at 0, and at every sixty seconds, until the media playback has concluded.
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta charset="utf-8" />
    <title></title>
    <style>
      body {
        width: 90vw;
        height: 90vh;
        background: #000;
        transition: background 1s;
      }
      span {
        font-family: Georgia;
        font-size: 36px;
        opacity: 0;
      }
    </style>
  </head>
  <body>
    <audio controls></audio>
    <br>
    <span></span>
    <script type="text/javascript">
      window.onload = function() {
        var url = "/path/to/audio";
        // given 240 seconds total duration of audio,
        // 240 / 12 = 20;
        // properties correspond to `<audio>` `.currentTime`,
        // values correspond to the color to set on the element
        var colors = {
          0: "red",
          20: "blue",
          40: "green",
          60: "yellow",
          80: "orange",
          100: "purple",
          120: "violet",
          140: "brown",
          160: "tan",
          180: "gold",
          200: "sienna",
          220: "skyblue"
        };
        var body = document.querySelector("body");
        var mediaSource = new MediaSource;
        var audio = document.querySelector("audio");
        var span = document.querySelector("span");
        var color = window.getComputedStyle(body)
                      .getPropertyValue("background-color");
        // console.log(mediaSource.readyState); // closed
        var mimecodec = "audio/mpeg";

        audio.oncanplay = function() {
          this.play();
        }

        audio.ontimeupdate = function() {
          // 240 / 12 = 20
          var curr = Math.round(this.currentTime);
          if (colors.hasOwnProperty(curr)) {
            // set `color` to `colors[curr]`
            color = colors[curr]
          }
          // animate `<span>` every 60 seconds
          if (curr % 60 === 0 && span.innerHTML === "") {
            var t = curr / 60;
            span.innerHTML = t + " minute" + (t === 1 ? "" : "s")
              + " of " + Math.round(this.duration) / 60
              + " minutes of audio";
            span.animate([{
              opacity: 0
            }, {
              opacity: 1
            }, {
              opacity: 0
            }], {
              duration: 2500,
              iterations: 1
            })
            .onfinish = function() {
              span.innerHTML = ""
            }
          }
          // change `background-color` of `body` every 20 seconds
          body.style.backgroundColor = color;
          console.log("current time:", curr
            , "current background color:", color
            , "duration:", this.duration);
        }

        // set `<audio>` `.src` to `mediaSource`
        audio.src = URL.createObjectURL(mediaSource);
        mediaSource.addEventListener("sourceopen", sourceOpen);

        function sourceOpen(event) {
          // if the media type is supported by `mediaSource`,
          // fetch the resource, begin the stream read,
          // append the stream to `sourceBuffer`
          if (MediaSource.isTypeSupported(mimecodec)) {
            var sourceBuffer = mediaSource.addSourceBuffer(mimecodec);
            // set `sourceBuffer` `.mode` to `"sequence"`
            sourceBuffer.mode = "sequence";
            fetch(url)
              // return `ReadableStream` of `response`
              .then(response => response.body.getReader())
              .then(reader => {
                var processStream = (data) => {
                  if (data.done) {
                    return;
                  }
                  // append chunk of stream to `sourceBuffer`
                  sourceBuffer.appendBuffer(data.value);
                };
                // at `sourceBuffer` `updateend`, call `reader.read()`
                // to read the next chunk of the stream, then append it
                // to `sourceBuffer`
                sourceBuffer.addEventListener("updateend", function() {
                  reader.read().then(processStream);
                });
                // start processing the stream
                reader.read().then(processStream);
                // when `reader` is closed, the read of the stream is complete
                return reader.closed.then(() => {
                  // signal end of stream to `mediaSource`
                  mediaSource.endOfStream();
                  return mediaSource.readyState;
                })
              })
              // do stuff when `reader.closed` resolves and the `mediaSource` stream has ended
              .then(msg => console.log(msg))
          }
          // if `mimecodec` is not supported by `MediaSource`
          else {
            alert(mimecodec + " not supported");
          }
        }
      }
    </script>
  </body>
</html>
plnkr http://plnkr.co/edit/fIm1Qp?p=preview
There is no way for you to measure the latency directly, but every audio element generates events: 'playing' when it has just (re)started playing (fired quite often), 'stalled' when the stream has stopped arriving, and 'waiting' while data is loading. So what you can do is manipulate your video based on these events.
Pause your video while stalled or waiting is being fired, then continue playing it once playing fires again.
But I advise you to check the other events that might affect your flow (error, for example, would be important for you).
https://developer.mozilla.org/en-US/docs/Web/API/HTMLAudioElement
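A sketch of wiring that up; pauseAnimation and resumeAnimation are hypothetical hooks into your own animation loop:

var audio = document.querySelector("audio");

audio.addEventListener("waiting", pauseAnimation);  // data is still loading
audio.addEventListener("stalled", pauseAnimation);  // the stream stopped arriving
audio.addEventListener("playing", resumeAnimation); // audible again, resume syncing
audio.addEventListener("error", function (e) {
    console.log("stream error", e);                 // surface stream failures
});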
What I would try first is to create a timestamp with performance.now(), process the data, and record it in a Blob with the new MediaRecorder API.
The recorder will ask the user for access to their audio input; this can be a problem for your app, but it looks mandatory for getting at the real latency.
As soon as this is done, there are many ways to measure the actual latency between generation and actual rendering: basically, by detecting a sound event.
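A sketch of that idea; it assumes the page may capture what is playing through the microphone via getUserMedia, and that detecting the known tone inside the recorded Blob is handled elsewhere:

var handedOff = performance.now();   // stamp taken when the audio is handed to the stream

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (micStream) {
    var recorder = new MediaRecorder(micStream);
    recorder.ondataavailable = function (e) {
        // e.data is a Blob of what the microphone heard; locating the known tone
        // in it (analysis omitted) gives the wall-clock moment of actual playback
        console.log("chunk delivered", performance.now() - handedOff, "ms after handoff");
    };
    recorder.start(250);             // deliver a chunk every 250 ms
});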
For further reference and example:
Recorder demo
https://github.com/mdn/web-dictaphone/
https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder_API/Using_the_MediaRecorder_API