I'm aware of the MediaRecorder API and how to record screen/audio/video, and then download those recordings. I'm also aware of npm modules such as react-media-recorder that leverage that API.
I would like to record a rolling n-second window of screen recording, to allow the user to create clips and then share those clips. I cannot record the entire session, as I don't know how long a session will last and therefore how big the recording might get (I assume there is a limit to how much a recording can hold in memory).
Is there any easy way to use MediaRecorder to record a rolling window (i.e. to always have in memory the last 30 seconds recorded)?
I spent quite a while trying to make this work. Unfortunately, the only solution that works for me involves making 30 recorders.
The naive solution to this problem is to call recorder.start(1000) to record data in one-second chunks, then maintain a circular buffer of those chunks from the dataavailable event. The issue is that MediaRecorder supports a very limited set of container formats, and none of them tolerate dropping chunks from the beginning of the recording: the first chunk carries header metadata that every later chunk depends on. With a better understanding of the container formats, I'm sure this strategy could be made to work to some extent, but simply concatenating the remaining chunks (when earlier ones are missing) does not produce a valid file.
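For illustration, this is roughly what that naive attempt looks like (a sketch of the broken approach only; stream stands for whatever MediaStream is being recorded, and as explained above, the resulting Blob is generally not a valid file once the leading chunk has been dropped):
// Naive (broken) approach: keep only the last N one-second chunks.
const N = 30;
const chunks = [];
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8,opus' });
recorder.addEventListener('dataavailable', (e) => {
  chunks.push(e.data);
  if (chunks.length > N) chunks.shift(); // drops the chunk holding the WebM header
});
recorder.start(1000); // emit a chunk roughly every second
// Later, to "clip": new Blob(chunks) -- usually not playable once chunks have been dropped.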
Another attempt I made used two MediaRecorder objects at once. One of them would record second-long start packets, and the other would record regular data packets. When taking a clip, this then combined a start packet from the first recorder with the packets from the second. However, this usually resulted in corrupted recordings.
This solution is not fantastic, but it does work: the idea is to keep 30 MediaRecorder objects, each offset by one second. For the sake of this demo, the clips are 5 seconds long, not 30:
<canvas></canvas><button>Clip!</button>
<style>
canvas, video, button {
display: block;
}
</style>
<!-- draw to the canvas to create a stream for testing -->
<script>
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
// fill background with white
ctx.fillStyle = 'white';
ctx.fillRect(0, 0, canvas.width, canvas.height);
// randomly draw stuff
setInterval(() => {
const x = Math.floor(Math.random() * canvas.width);
const y = Math.floor(Math.random() * canvas.height);
const radius = Math.floor(Math.random() * 30);
ctx.beginPath();
ctx.arc(x, y, radius, 0, Math.PI * 2);
ctx.stroke();
}, 100);
</script>
<!-- actual recording -->
<script>
// five second clips
const LENGTH = 5;
const codec = 'video/webm;codecs=vp8,opus'
const stream = canvas.captureStream();
// circular buffer of recorders
let head = 0;
const recorders = new Array(LENGTH)
.fill()
.map(() => new MediaRecorder(stream, { mimeType: codec }));
// start them all
recorders.forEach((recorder) => recorder.start());
let data = undefined;
recorders.forEach((r) => r.addEventListener('dataavailable', (e) => {
data = e.data;
}));
setInterval(() => {
recorders[head].stop();
recorders[head].start();
head = (head + 1) % LENGTH;
}, 1000);
// download the data
const download = () => {
if (data === undefined) return;
const url = URL.createObjectURL(data);
// download the url
const a = document.createElement('a');
a.download = 'test.webm';
a.href = url;
a.click();
URL.revokeObjectURL(url);
};
// stackoverflow doesn't allow downloads
// we show the clip instead
const show = () => {
if (data === undefined) return;
const url = URL.createObjectURL(data);
// display url in new video element
const v = document.createElement('video');
v.src = url;
v.controls = true;
document.body.appendChild(v);
};
document.querySelector('button').addEventListener('click', show);
</script>
I'm working on a client-side project which lets a user supply a video file and apply basic manipulations to it. I'm trying to extract the frames from the video reliably. At the moment I have a <video> which I'm loading selected video into, and then pulling out each frame as follows:
Seek to the beginning
Pause the video
Draw <video> to a <canvas>
Capture the frame from the canvas with .toDataUrl()
Seek forward by 1 / 30 seconds (1 frame).
Rinse and repeat (a rough sketch of this loop is shown below)
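For illustration, a deliberately rough sketch of that loop (video and canvas are the elements described above); the core problem is that setting currentTime seeks asynchronously, so drawing right away often captures the previous frame:
// Illustrative only -- the unreliable approach described above.
const ctx = canvas.getContext('2d');
const frames = [];
video.pause();
video.currentTime = 0; // seek to the beginning

function step() {
  // The seek triggered below may not have finished rendering yet,
  // which is why "stuck" (repeated) frames show up.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  frames.push(canvas.toDataURL());
  if (video.currentTime < video.duration) {
    video.currentTime += 1 / 30; // seek forward by one frame
    setTimeout(step, 0);         // rinse and repeat
  }
}
setTimeout(step, 0);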
This is a rather inefficient process, and more specifically it is proving unreliable, as I often get stuck frames. This seems to happen because the actual <video> element hasn't updated yet when it is drawn to the canvas.
I'd rather not have to upload the original video to the server just to split the frames, and then download them back to the client.
Any suggestions for a better way to do this are greatly appreciated. The only caveat is that I need it to work with any format the browser supports (decoding in JS isn't a great option).
[2021 update]: Since this question (and answer) was first posted, things have evolved in this area and it is finally time for an update; the method that was originally exposed here is now out of date, but luckily a few new or upcoming APIs can help us do a better job of extracting video frames:
The most promising and powerful one, but still under development, with a lot of restrictions: WebCodecs
This new API unleashes access to the media decoders and encoders, enabling us to access the raw data of video frames (YUV planes), which may be far more useful than rendered frames for many applications; and for those who need rendered frames, the VideoFrame interface that this API exposes can be drawn directly to a <canvas> element or converted to an ImageBitmap, avoiding the slow route through a MediaElement.
However there is a catch: apart from its currently limited support, this API requires that the input already be demuxed.
There are demuxers available online; for instance, for MP4 videos, GPAC's mp4box.js will help a lot.
A full example can be found in the proposal's repo.
The key part consists of:
const decoder = new VideoDecoder({
output: onFrame, // the callback to handle all the VideoFrame objects
error: e => console.error(e),
});
decoder.configure(config); // depends on the input file, your demuxer should provide it
demuxer.start((chunk) => { // depends on the demuxer, but you need it to return chunks of video data
decoder.decode(chunk); // will trigger our onFrame callback
})
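For MP4 files specifically, here is a rough, untested sketch of how mp4box.js could play the demuxer's role in the snippet above (it assumes MP4Box is loaded globally, e.g. from mp4box.all.min.js, and it omits the description field that AVC/HEVC configurations additionally require, extracted from the avcC/hvcC box):
const mp4boxFile = MP4Box.createFile();

mp4boxFile.onReady = (info) => {
  const track = info.videoTracks[0];
  decoder.configure({
    codec: track.codec,             // e.g. "vp09.00.10.08" or "avc1.42E01E"
    codedWidth: track.video.width,
    codedHeight: track.video.height,
    // description: ...             // required for avc1/hvc1 tracks (from the avcC/hvcC box)
  });
  mp4boxFile.setExtractionOptions(track.id);
  mp4boxFile.start();
};

mp4boxFile.onSamples = (trackId, user, samples) => {
  for (const sample of samples) {
    decoder.decode(new EncodedVideoChunk({
      type: sample.is_sync ? 'key' : 'delta',
      timestamp: (sample.cts * 1e6) / sample.timescale, // microseconds
      duration: (sample.duration * 1e6) / sample.timescale,
      data: sample.data,
    }));
  }
};

// Feed the file to the demuxer (inside an async function);
// mp4box.js needs the byte offset of each appended buffer.
const buffer = await (await fetch('video.mp4')).arrayBuffer();
buffer.fileStart = 0;
mp4boxFile.appendBuffer(buffer);
mp4boxFile.flush();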
Note that we can even grab the frames of a MediaStream, thanks to MediaCapture Transform's MediaStreamTrackProcessor.
This means that we should be able to combine HTMLMediaElement.captureStream() and this API in order to get our VideoFrames without needing a demuxer. However, this works only for a few codecs, and it means we can only extract frames at playback speed...
Anyway, here is an example that works in the latest Chromium-based browsers, with chrome://flags/#enable-experimental-web-platform-features switched on:
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
button.onclick = async(evt) => {
if (window.MediaStreamTrackProcessor) {
let stopped = false;
const track = await getVideoTrack();
const processor = new MediaStreamTrackProcessor(track);
const reader = processor.readable.getReader();
readChunk();
function readChunk() {
reader.read().then(async({ done, value }) => {
if (value) {
const bitmap = await createImageBitmap(value);
const index = frames.length;
frames.push(bitmap);
select.append(new Option("Frame #" + (index + 1), index));
value.close();
}
if (!done && !stopped) {
readChunk();
} else {
select.disabled = false;
}
});
}
button.onclick = (evt) => stopped = true;
button.textContent = "stop";
} else {
console.error("your browser doesn't support this API yet");
}
};
select.onchange = (evt) => {
const frame = frames[select.value];
canvas.width = frame.width;
canvas.height = frame.height;
ctx.drawImage(frame, 0, 0);
};
async function getVideoTrack() {
const video = document.createElement("video");
video.crossOrigin = "anonymous";
video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
document.body.append(video);
await video.play();
const [track] = video.captureStream().getVideoTracks();
video.onended = (evt) => track.stop();
return track;
}
video,canvas {
max-width: 100%
}
<button>start</button>
<select disabled>
</select>
<canvas></canvas>
The easiest to use, but still with relatively poor browser support, and subject to the browser dropping frames: HTMLVideoElement.requestVideoFrameCallback
This method lets us schedule a callback that fires whenever a new frame is painted on the HTMLVideoElement.
It is higher level than WebCodecs, and thus may have more latency; moreover, with it we can only extract frames at playback speed.
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
button.onclick = async(evt) => {
if (HTMLVideoElement.prototype.requestVideoFrameCallback) {
let stopped = false;
const video = await getVideoElement();
const drawingLoop = async(timestamp, frame) => {
const bitmap = await createImageBitmap(video);
const index = frames.length;
frames.push(bitmap);
select.append(new Option("Frame #" + (index + 1), index));
if (!video.ended && !stopped) {
video.requestVideoFrameCallback(drawingLoop);
} else {
select.disabled = false;
}
};
// the last call to rVFC may happen before .ended is set but never resolve
video.onended = (evt) => select.disabled = false;
video.requestVideoFrameCallback(drawingLoop);
button.onclick = (evt) => stopped = true;
button.textContent = "stop";
} else {
console.error("your browser doesn't support this API yet");
}
};
select.onchange = (evt) => {
const frame = frames[select.value];
canvas.width = frame.width;
canvas.height = frame.height;
ctx.drawImage(frame, 0, 0);
};
async function getVideoElement() {
const video = document.createElement("video");
video.crossOrigin = "anonymous";
video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
document.body.append(video);
await video.play();
return video;
}
video,canvas {
max-width: 100%
}
<button>start</button>
<select disabled>
</select>
<canvas></canvas>
For your Firefox users, Mozilla's non-standard HTMLMediaElement.seekToNextFrame()
As its name implies, this will make your <video> element seek to the next frame.
Combining this with the seeked event, we can build a loop that grabs every frame of our source, faster than playback speed (yeah!).
But this method is proprietary, available only in Gecko-based browsers, not on any standards track, and will probably be removed in the future once they implement the methods described above.
For the time being, though, it is the best option for Firefox users:
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
button.onclick = async(evt) => {
if (HTMLMediaElement.prototype.seekToNextFrame) {
let stopped = false;
const video = await getVideoElement();
const requestNextFrame = (callback) => {
video.addEventListener("seeked", () => callback(video.currentTime), {
once: true
});
video.seekToNextFrame();
};
const drawingLoop = async(timestamp, frame) => {
if(video.ended) {
select.disabled = false;
return; // FF apparently doesn't like to create ImageBitmaps
// from ended videos...
}
const bitmap = await createImageBitmap(video);
const index = frames.length;
frames.push(bitmap);
select.append(new Option("Frame #" + (index + 1), index));
if (!video.ended && !stopped) {
requestNextFrame(drawingLoop);
} else {
select.disabled = false;
}
};
requestNextFrame(drawingLoop);
button.onclick = (evt) => stopped = true;
button.textContent = "stop";
} else {
console.error("your browser doesn't support this API yet");
}
};
select.onchange = (evt) => {
const frame = frames[select.value];
canvas.width = frame.width;
canvas.height = frame.height;
ctx.drawImage(frame, 0, 0);
};
async function getVideoElement() {
const video = document.createElement("video");
video.crossOrigin = "anonymous";
video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
document.body.append(video);
await video.play();
return video;
}
video,canvas {
max-width: 100%
}
<button>start</button>
<select disabled>
</select>
<canvas></canvas>
The least reliable option, which has stopped working over time: HTMLVideoElement.ontimeupdate
The strategy pause - draw - play - wait for timeupdate used to be (in 2015) a fairly reliable way to know when a new frame had been painted to the element, but since then browsers have placed serious limitations on this event, which used to fire at a high rate, so there isn't much information we can get from it anymore...
I am not sure I can still advocate for its use; I haven't checked how Safari (currently the only browser without another solution) handles this event (its media handling seems very odd to me), and there is a good chance that a simple setTimeout(fn, 1000 / 30) loop is actually more reliable in most cases.
Here's a working function that was tweaked from this question:
async function extractFramesFromVideo(videoUrl, fps = 25) {
return new Promise(async (resolve) => {
// fully download it first (no buffering):
let videoBlob = await fetch(videoUrl).then((r) => r.blob());
let videoObjectUrl = URL.createObjectURL(videoBlob);
let video = document.createElement("video");
let seekResolve;
video.addEventListener("seeked", async function () {
if (seekResolve) seekResolve();
});
video.src = videoObjectUrl;
// workaround chromium metadata bug (https://stackoverflow.com/q/38062864/993683)
while (
(video.duration === Infinity || isNaN(video.duration)) &&
video.readyState < 2
) {
await new Promise((r) => setTimeout(r, 1000));
video.currentTime = 10000000 * Math.random();
}
let duration = video.duration;
let canvas = document.createElement("canvas");
let context = canvas.getContext("2d");
let [w, h] = [video.videoWidth, video.videoHeight];
canvas.width = w;
canvas.height = h;
let frames = [];
let interval = 1 / fps;
let currentTime = 0;
while (currentTime < duration) {
video.currentTime = currentTime;
await new Promise((r) => (seekResolve = r));
context.drawImage(video, 0, 0, w, h);
let base64ImageData = canvas.toDataURL();
frames.push(base64ImageData);
currentTime += interval;
}
resolve(frames);
});
}
Usage:
let frames = await extractFramesFromVideo("https://example.com/video.webm");
Note that there's currently no easy way to determine the actual/natural frame rate of a video unless perhaps you use ffmpeg.js, but that's a 10+ megabyte javascript file (since it's an emscripten port of the actual ffmpeg library, which is obviously huge).
2023 answer:
If you want to extract all frames reliably (i.e. no "seeking" and missing frames), and do so as fast as possible (i.e. not limited by playback speed or other factors) then you probably want to use the WebCodecs API. As of writing it's supported in Chrome and Edge. Other browsers will soon follow - hopefully by the end of 2023 there will be wide support.
I put together a simple library for this, but it currently only supports mp4 files. Here's an example:
<canvas id="canvasEl"></canvas>
<script type="module">
import getVideoFrames from "https://deno.land/x/get_video_frames@v0.0.8/mod.js"
let ctx = canvasEl.getContext("2d");
// `getVideoFrames` requires a video URL as input.
// If you have a file/blob instead of a videoUrl, turn it into a URL like this:
let videoUrl = URL.createObjectURL(fileOrBlob);
await getVideoFrames({
videoUrl,
onFrame(frame) { // `frame` is a VideoFrame object: https://developer.mozilla.org/en-US/docs/Web/API/VideoFrame
ctx.drawImage(frame, 0, 0, canvasEl.width, canvasEl.height);
frame.close();
},
onConfig(config) {
canvasEl.width = config.codedWidth;
canvasEl.height = config.codedHeight;
},
});
URL.revokeObjectURL(videoUrl); // revoke the object URL to prevent a memory leak
</script>
Demo: https://jsbin.com/mugoguxiha/edit?html,output
Github: https://github.com/josephrocca/getVideoFrames.js
(Note that the WebCodecs API is mentioned in @Kaiido's excellent answer, but this API alone unfortunately doesn't solve the issue - the example above uses mp4box.js to handle the container/demuxing side that WebCodecs doesn't cover. Perhaps WebCodecs will eventually support the container side of things and this answer will become mostly irrelevant, but until then I hope this is useful.)
I made this web app to compose music, and I wanted to add a feature to download the composition as .mp3/.wav/whatever file format is possible. I've searched many times for how to do this and always gave up, because the only examples I could find were microphone recorders, whereas I want to record the website's final audio output.
I play audio in this way:
const a_ctx = new(window.AudioContext || window.webkitAudioContext)()
function playAudio(buf){
const source = a_ctx.createBufferSource()
source.buffer = buf
source.playbackRate.value = pitchKey;
//Other code to modify the audio like adding reverb and changing volume
source.start(0)
}
where buf is the AudioBuffer.
To sum up, I want to record the whole window's audio but can't come up with a way to do it.
link to the whole website code on github
Maybe you could use the MediaStream Recording API (https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API):
The MediaStream Recording API, sometimes simply referred to as the Media Recording API or the MediaRecorder API, is closely affiliated with the Media Capture and Streams API and the WebRTC API. The MediaStream Recording API makes it possible to capture the data generated by a MediaStream or HTMLMediaElement object for analysis, processing, or saving to disk. It's also surprisingly easy to work with.
Also, you may take a look at this topic: new MediaRecorder(stream[, options]) stream can living modify?. It seems to discuss a related issue and might provide you with some insights.
The following code generates some random noise, applies a transform to it, plays it, and creates an audio control that allows the noise to be downloaded via the context menu's "Save audio as..." entry (I needed to change the extension of the saved file to .wav in order to play it).
<html>
<head>
<script>
const context = new(window.AudioContext || window.webkitAudioContext)()
async function run()
{
var myArrayBuffer = context.createBuffer(2, context.sampleRate, context.sampleRate);
// Fill the buffer with white noise;
// just random values between -1.0 and 1.0
for (var channel = 0; channel < myArrayBuffer.numberOfChannels; channel++) {
// This gives us the actual array that contains the data
var nowBuffering = myArrayBuffer.getChannelData(channel);
for (var i = 0; i < myArrayBuffer.length; i++) {
// audio needs to be in [-1.0; 1.0]
nowBuffering[i] = Math.random() * 2 - 1;
}
}
playAudio(myArrayBuffer)
}
function playAudio(buf){
const streamNode = context.createMediaStreamDestination();
const stream = streamNode.stream;
const recorder = new MediaRecorder( stream );
const chunks = [];
recorder.ondataavailable = evt => chunks.push( evt.data );
recorder.onstop = evt => exportAudio( new Blob( chunks ) );
const source = context.createBufferSource()
source.onended = () => recorder.stop();
source.buffer = buf
source.playbackRate.value = 0.2
source.connect( streamNode );
source.connect(context.destination);
source.start(0)
recorder.start();
}
function exportAudio( blob ) {
const aud = new Audio( URL.createObjectURL( blob ) );
aud.controls = true;
document.body.prepend( aud );
}
</script>
</head>
<body onload="javascript:run()">
<input type="button" onclick="context.resume()" value="play"/>
</body>
</html>
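If a download link is preferred over the context menu, exportAudio could be extended along these lines (just a sketch; the .webm extension is an assumption, since MediaRecorder's default container varies by browser and should be matched to recorder.mimeType):
function exportAudio( blob ) {
  const url = URL.createObjectURL( blob );
  const aud = new Audio( url );
  aud.controls = true;
  document.body.prepend( aud );
  // Also offer the recording as a download link.
  const a = document.createElement( "a" );
  a.href = url;
  a.download = "recording.webm"; // extension is a guess; match it to recorder.mimeType
  a.textContent = "download recording";
  document.body.prepend( a );
}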
Is this what you were looking for?
I'm building a simple looper to help me come to an understanding of the Web Audio API, but I'm struggling to get a buffer source to play back the recorded audio.
The code has been simplified as much as possible, but with annotations it's still 70+ lines, omitting the CSS and HTML, so apologies for that. A version including the CSS and HTML can be found on JSFiddle:
https://jsfiddle.net/b5w9j4yk/10/
Any help would be much appreciated. Thank you :)
// Aim of the code is to record the input from the mic into a Float32Array, then pass that to a buffer which is linked to a buffer source, so the audio can be played back.
// Grab DOM Elements
const playButton = document.getElementById('play');
const recordButton = document.getElementById('record');
// If allowed access to microphone run this code
const promise = navigator.mediaDevices.getUserMedia({audio: true, video: false})
.then((stream) => {
recordButton.addEventListener('click', () => {
// when the record button is pressed, instantiate the record buffer
if (!recordArmed) {
recordArmed = true;
recordButton.classList.add('on');
console.log('recording armed')
recordBuffer = new Float32Array(audioCtx.sampleRate * 10);
}
else {
recordArmed = false;
recordButton.classList.remove('on');
// After the recording has stopped, pass the recordBuffer to the source's buffer
myArrayBuffer.copyToChannel(recordBuffer, 0);
//Looks like the buffer has been passed
console.log(myArrayBuffer.getChannelData(0));
}
});
// this should start playback of the source; intended to be used after the audio has been recorded, but I can't get it to work in this context
playButton.addEventListener('click', () => {
playButton.classList.add('on');
source.start();
});
//Transport variables
let recordArmed = false;
let playing = false;
// this buffer will later be assigned a Float32Array / I'd like to keep this intermediate buffer so the audio can be sliced and manipulated with ease later
let recordBuffer;
// Declare the context, the input source and a block processor to pass the input source to the recordBuffer
const audioCtx = new AudioContext();
const audioIn = audioCtx.createMediaStreamSource(stream);
const processor = audioCtx.createScriptProcessor(512, 1, 1);
// Create a source and a corresponding buffer for playback, then link them
const myArrayBuffer = audioCtx.createBuffer(1, audioCtx.sampleRate * 10, audioCtx.sampleRate);
const source = audioCtx.createBufferSource();
source.buffer = myArrayBuffer;
// Audio Routing
audioIn.connect(processor);
source.connect(audioCtx.destination);
// When recording is armed pass the samples of the block one at a time to the record buffer
processor.onaudioprocess = ((audioProcessingEvent) => {
let inputBuffer = audioProcessingEvent.inputBuffer;
let i = 0;
if (recordArmed) {
for (let channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
let inputData = inputBuffer.getChannelData(channel);
let avg = 0;
inputData.forEach(sample => {
recordBuffer.set([sample], i);
i++;
});
}
}
else {
i = 0;
}
});
})
I am trying to debug a problem I'm having with OpenCV.js. I am trying to create a simple circle-finding function, but my video feed is being displayed in my canvas. I've boiled it down to the smallest set that shows the issue.
What makes no sense is that I create a new, empty matrix and display it, and I see my video feed in it.
I start with the typical way of detecting circles: stream the video into a matrix srcMat, convert srcMat into a grayscale grayMat, and then call HoughCircles to detect circles from grayMat into circlesMat.
Then, independently, I create a new displayMat and display it.
I see the output below, where the right-hand side is displayMat.
Somehow displayMat is being filled. The effect goes away if I comment out the HoughCircles line.
How is this happening?
const cv = require('opencv.js'); // v1.2.1
const video = document.getElementById('video');
const width = 300;
const height = 225;
const FPS = 30;
let stream;
let srcMat = new cv.Mat(height, width, cv.CV_8UC4);
let grayMat = new cv.Mat(height, width, cv.CV_8UC1);
let circlesMat = new cv.Mat();
const cap = new cv.VideoCapture(video);
export default function capture() {
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
.then(_stream => {
stream = _stream;
video.srcObject = stream;
video.play();
setTimeout(processVideo, 0)
})
.catch(err => console.log(`An error occurred: ${err}`));
function processVideo () {
const begin = Date.now();
// these next three lines shouldn't affect displayMat
cap.read(srcMat);
cv.cvtColor(srcMat, grayMat, cv.COLOR_RGBA2GRAY);
// if this line is commented out, the effect goes away
cv.HoughCircles(grayMat, circlesMat, cv.HOUGH_GRADIENT, 1, 45, 75, 40, 0, 0);
// this ought to simply create a new matrix and draw it
let displayMat = new cv.Mat(height, width, cv.CV_8UC1);
cv.imshow('canvasOutput', displayMat);
const delay = 1000/FPS - (Date.now() - begin);
setTimeout(processVideo, delay);
}
}
Most probably displayMat is allocated in a memory block where some image processing was previously done by HoughCircles() or similar. That memory was released and became available for new objects, but neither freeing it nor constructing the new Mat cleared the memory block.
So either clear displayMat first, since it is constructed on top of "garbage" left over from previous operations, or use cv.Mat.zeros() to construct displayMat (zeros() fills the whole new matrix buffer with zeros):
let displayMat = cv.Mat.zeros(height, width, cv.CV_8UC1);
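Unrelated to the uninitialized-memory issue, but worth noting: OpenCV.js Mats live on the Emscripten heap and are not garbage collected, so allocating a new displayMat on every processVideo call also leaks memory over time. A sketch of the loop body with explicit cleanup (reusing the variables from the question):
function processVideo() {
  const begin = Date.now();
  cap.read(srcMat);
  cv.cvtColor(srcMat, grayMat, cv.COLOR_RGBA2GRAY);
  cv.HoughCircles(grayMat, circlesMat, cv.HOUGH_GRADIENT, 1, 45, 75, 40, 0, 0);
  const displayMat = cv.Mat.zeros(height, width, cv.CV_8UC1); // zero-initialized
  cv.imshow('canvasOutput', displayMat);
  displayMat.delete(); // free the Emscripten heap memory once drawn
  const delay = 1000 / FPS - (Date.now() - begin);
  setTimeout(processVideo, delay);
}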
I am attempting to use a ChannelSplitter node to send an audio signal into both a ChannelMerger node and to the destination, and then trying to use the ChannelMerger node to merge two different audio signals (one from the split source, one from the microphone using getUserMedia) into a recorder using Recorder.js.
I keep getting the following error: "Uncaught SyntaxError: An invalid or illegal string was specified."
The error is at the following line of code:
audioSource.splitter.connect(merger);
Where audioSource is an instance of ThreeAudio.Source from the library ThreeAudio.js, splitter is a channel splitter I instantiated myself by modifying the prototype, and merger is my merger node. The code that precedes it is:
merger = context.createChannelMerger(2);
userInput.connect(merger);
Where userInput is the stream from the user's microphone. That one connects without throwing an error. Sound is getting from the audioSource to the destination (I can hear it), so it doesn't seem like the splitter is necessarily wrong - I just can't seem to connect it.
Does anyone have any insight?
I was struggling to understand the ChannelSplitterNode and ChannelMergerNode API. Finally I found the missing piece: the 2nd and 3rd optional parameters of the connect() method - the output and input channel indices.
connect(destinationNode: AudioNode, output?: number, input?: number): AudioNode;
When using the connect() method with splitter or merger nodes, specify the output/input channel. This is how you split and merge audio data.
You can see in this example how I load audio data, split it into 2 channels, and control the left/right output. Notice the 2nd and 3rd parameter of the connect() method:
const audioUrl = "https://s3-us-west-2.amazonaws.com/s.cdpn.io/858/outfoxing.mp3";
const audioElement = new Audio(audioUrl);
audioElement.crossOrigin = "anonymous"; // cross-origin - if file is stored on remote server
const audioContext = new AudioContext();
const audioSource = audioContext.createMediaElementSource(audioElement);
const volumeNodeL = new GainNode(audioContext);
const volumeNodeR = new GainNode(audioContext);
volumeNodeL.gain.value = 2;
volumeNodeR.gain.value = 2;
const channelsCount = 2; // or read from: 'audioSource.channelCount'
const splitterNode = new ChannelSplitterNode(audioContext, { numberOfOutputs: channelsCount });
const mergerNode = new ChannelMergerNode(audioContext, { numberOfInputs: channelsCount });
audioSource.connect(splitterNode);
splitterNode.connect(volumeNodeL, 0); // connect OUTPUT channel 0
splitterNode.connect(volumeNodeR, 1); // connect OUTPUT channel 1
volumeNodeL.connect(mergerNode, 0, 0); // connect INPUT channel 0
volumeNodeR.connect(mergerNode, 0, 1); // connect INPUT channel 1
mergerNode.connect(audioContext.destination);
let isPlaying;
function playPause() {
// check if context is in suspended state (autoplay policy)
if (audioContext.state === 'suspended') {
audioContext.resume();
}
isPlaying = !isPlaying;
if (isPlaying) {
audioElement.play();
} else {
audioElement.pause();
}
}
function setBalance(val) {
volumeNodeL.gain.value = 1 - val;
volumeNodeR.gain.value = 1 + val;
}
<h3>Try using headphones</h3>
<button onclick="playPause()">play/pause</button>
<br><br>
<button onclick="setBalance(-1)">Left</button>
<button onclick="setBalance(0)">Center</button>
<button onclick="setBalance(+1)">Right</button>
P.S.: The audio track isn't a true stereo track, but left and right copies of the same mono recording. You can try this example with a real stereo track for a true balance effect.
Here's some working splitter/merger code that creates a ping-pong delay - that is, it sets up separate delays on the L and R channels of a stereo signal, and crosses over the feedback. This is from my input effects demo on webaudiodemos.appspot.com (code on github).
var merger = context.createChannelMerger(2);
var leftDelay = context.createDelay();
var rightDelay = context.createDelay();
var leftFeedback = context.createGain();
var rightFeedback = context.createGain();
var splitter = context.createChannelSplitter(2);
// Split the stereo signal.
splitter.connect( leftDelay, 0 );
// If the signal is dual copies of a mono signal, we don't want the right channel -
// it will just sound like a mono delay. If it was a real stereo signal, we do want
// it to just mirror the channels.
if (isTrueStereo)
splitter.connect( rightDelay, 1 );
leftDelay.delayTime.value = delayTime;
rightDelay.delayTime.value = delayTime;
leftFeedback.gain.value = feedback;
rightFeedback.gain.value = feedback;
// Connect the routing - left bounces to right, right bounces to left.
leftDelay.connect(leftFeedback);
leftFeedback.connect(rightDelay);
rightDelay.connect(rightFeedback);
rightFeedback.connect(leftDelay);
// Re-merge the two delay channels into stereo L/R
leftFeedback.connect(merger, 0, 0);
rightFeedback.connect(merger, 0, 1);
// Now connect your input to "splitter", and connect "merger" to your output destination.
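For completeness, a sketch of the pieces that snippet assumes (the values are placeholders; in the original demo they come from the page's controls):
// Placeholder values for the variables used above.
var context = new (window.AudioContext || window.webkitAudioContext)();
var delayTime = 0.25;     // seconds of delay per bounce
var feedback = 0.5;       // how much signal is fed back on each bounce
var isTrueStereo = false; // set to true for genuine stereo sources

// ...create the splitter/delays/feedback gains/merger exactly as above, then:
var input = context.createMediaElementSource(document.querySelector('audio'));
input.connect(splitter);             // feed the effect
input.connect(context.destination);  // dry signal
merger.connect(context.destination); // wet (ping-pong delayed) signal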