I’ve created a minimal WebRTC test site that can request the user’s webcam/audio stream, record it, and play back the recording after it has been stopped.
Demo: https://output.jsbin.com/tabosipefo/
Edit¹: https://jsbin.com/tabosipefo/edit?html,console,output
Since this all happens within one Promise, navigator.mediaDevices.getUserMedia(), I was wondering whether it is actually possible to detect an ongoing stream and (a) record it, and (b) stop and save it.
¹ WebRTC does not work in JS Bin in edit view for some reason...
If you use no framework and want to use vanilla JS, your best bet is to tack the stream object onto the global window object.
Preview stream
const showWebcamStream = () => {
  navigator.mediaDevices
    .getUserMedia({ audio: true, video: true })
    .then(stream => {
      window.localStream = stream; // ⭠ tack it to the window object
      // grab the <video> object
      const video = document.querySelector("#video-preview");
      video.srcObject = stream;
      // Display stream
      video.onloadedmetadata = () => video.play();
    })
    .catch(err => console.log(err.name, err.message));
};
Now the video will be displayed within the video element (id: #video-preview).
Stop Stream(s)
const hideWebcamStream = () => localStream.getTracks().forEach(track => track.stop());
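Optionally, if you also want to clear the frozen preview frame after stopping the tracks, here is a small sketch; the helper name is just an example and it assumes the same #video-preview element as above:

// Hypothetical helper: detach the stopped stream from the preview element
const clearWebcamPreview = () => {
  const video = document.querySelector("#video-preview");
  video.srcObject = null; // the element now shows nothing instead of the last frame
};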
You should put the mediaRecorder in the window object in order to stop it later.
Record Stream
const startWebcamRecorder = () => {
  // check if localStream is in window and if it is active
  if ("localStream" in window && localStream.active) {
    // save the mediaRecorder to window as well, so it can be stopped independently later
    window.mediaRecorder = new MediaRecorder(localStream);
    window.dataChunks = [];
    mediaRecorder.start();
    console.log(mediaRecorder.state);
    mediaRecorder.ondataavailable = e => dataChunks.push(e.data);
  }
};
Stop Recording and Preview the recording
You need another video element to play back your recording (#video-playback).
const stopWebcamRecorder = () => {
  if ("mediaRecorder" in window && mediaRecorder.state === "recording") {
    mediaRecorder.stop();
    console.log(mediaRecorder.state);

    mediaRecorder.onstop = () => {
      let blob = new Blob(dataChunks, { type: "video/mp4" });
      dataChunks = [];
      let videoURL = window.URL.createObjectURL(blob);
      const videoPlayback = document.getElementById("video-playback");
      videoPlayback.src = videoURL;
    };
  }
};
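For completeness, a minimal sketch of how these handlers might be wired to buttons; the element ids below are assumptions for illustration, not part of the demo:

// Hypothetical wiring - the button ids are examples only
document.getElementById("btn-preview").onclick = showWebcamStream;
document.getElementById("btn-stop-camera").onclick = hideWebcamStream;
document.getElementById("btn-record").onclick = startWebcamRecorder;
document.getElementById("btn-stop-recording").onclick = stopWebcamRecorder;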
Related
The following JavaScript code records sound and generates a blob with audio data every 0.5 seconds.
After recording has stopped, the program plays the first blob, data[0].
I need the audio player to fire an event after data[0] has played, and the event handler will deliver the next portion to the audio player: data[1] (then data[2], data[3], etc.).
How can I modify the code, and which objects should I use to do this?
I know that I could pass the whole data[] array to the audio player, but I need a mechanism that allows the audio player to request the next portions using events.
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function onSuccess(stream) {
    const recorder = new MediaRecorder(stream);
    const data = [];

    recorder.ondataavailable = (e) => {
      data.push(e.data);
    };

    recorder.start(500); // will fire the 'dataavailable' event every 0.5 seconds

    recorder.onstop = (e) => {
      const audio = document.createElement('audio');
      audio.src = window.URL.createObjectURL(new Blob( data[0] ));
    }

    setTimeout(() => {
      rec.stop();
    }, 5000);
  })
  .catch(function onError(error) {
    console.log(error.message);
  });
I guess that's what you're looking for?
navigator.mediaDevices
  .getUserMedia({ audio: true })
  .then(function onSuccess(stream) {
    // create an audio element and play the live stream
    const audio = document.createElement('audio');
    audio.srcObject = stream; // Pass the audio stream
    audio.controls = true;
    audio.play();
    document.body.appendChild(audio);

    const recorder = new MediaRecorder(stream);
    const data = [];

    // Set event listener
    // ondataavailable fires when you call stop() or requestData(), and after every timeSlice you pass to start()
    recorder.ondataavailable = e => data.push(e.data);

    // Start recording
    // Will generate a blob every 500 ms
    recorder.start(500);
  })
  .catch(function onError(error) {
    console.log(error.message);
  });
You had some mistakes to correct:
Calling start() with a timeslice parameter fires ondataavailable after every timeslice, but each chunk on its own is not a complete file; you still need to stop the recorder to get the final chunk and assemble the blob.
You made a mistake with the name of the recorder variable and with the time in the setTimeout function.
You recreate an audio player every time the recorder stops and never append it to the DOM.
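As for the sequential, event-driven playback the question asks about, one possible approach (not from the answer above) is the MediaSource API, which lets you hand the player one recorded chunk at a time and get notified when it is ready for the next one. A rough sketch, assuming data[] holds the webm/opus chunks collected by the recorder (Chrome's default audio format):

// Hypothetical sketch: feed recorded chunks to an <audio> element one by one via MediaSource
const audio = document.createElement('audio');
document.body.appendChild(audio);

const mediaSource = new MediaSource();
audio.src = window.URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('audio/webm; codecs=opus');
  let index = 0;

  // 'updateend' fires when the previous chunk has been processed - deliver the next portion
  sourceBuffer.addEventListener('updateend', async () => {
    if (index < data.length) {
      sourceBuffer.appendBuffer(await data[index++].arrayBuffer());
    } else if (mediaSource.readyState === 'open') {
      mediaSource.endOfStream();
    }
  });

  // kick things off with the first portion
  sourceBuffer.appendBuffer(await data[index++].arrayBuffer());
  audio.play();
});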
I am pretty sure I did everything correctly, but when I try to play or download the file, nothing plays. I am using the Web Audio API to record audio from the microphone to a WAV format, and I am using this library to create the .wav file. It seems like nothing is being encoded.
navigator.mediaDevices.getUserMedia({
  audio: true, video: false
})
.then((stream) => {
  var data

  context = new AudioContext()
  var source = context.createMediaStreamSource(stream)
  var scriptNode = context.createScriptProcessor(8192, 1, 1)
  source.connect(scriptNode)
  scriptNode.connect(context.destination)
  encoder = new WavAudioEncoder(16000, 1)

  scriptNode.onaudioprocess = function(e){
    data = e.inputBuffer.getChannelData('0')
    console.log(data)
    encoder.encode(data)
  }

  $('#stop').click(() => {
    source.disconnect()
    scriptNode.disconnect()
    blob = encoder.finish()
    console.log(blob)
    url = window.URL.createObjectURL(blob)
    // audio source
    $('#player').attr('src', url)
    // audio control
    $("#pw")[0].load()
  })
})
I figured it out! To help anyone who needs to do the same thing: it uses the Web Audio API and this JavaScript library.
navigator.mediaDevices.getUserMedia({
  audio: true, video: false
})
.then((stream) => {
  context = new AudioContext()
  var source = context.createMediaStreamSource(stream)
  var rec = new Recorder(source)
  rec.record()

  $('#stop').click(() => {
    rec.stop()
    rec.exportWAV(somefunction) // exportWAV() passes the finished WAV blob to your callback
  })
})
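For reference, a minimal sketch of what that callback could look like, assuming the same #player audio element as in the earlier snippet (the callback name is just a placeholder):

// Hypothetical callback for exportWAV(); Recorder.js calls it with the finished WAV blob
function somefunction(blob) {
  var url = window.URL.createObjectURL(blob)
  $('#player').attr('src', url)  // point the <audio> element at the recording
  $('#player')[0].load()
}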
Use RecordRTC for recording video and audio; I used it in my project and it works well. Here is the code to record audio using recordrtc.org:
startRecording(event) { // call this to start recording the audio (or video, or both)
  this.recording = true;
  let mediaConstraints = {
    audio: true
  };

  // Older browsers might not implement mediaDevices at all, so we set an empty object first
  if (navigator.mediaDevices === undefined) {
    navigator.mediaDevices = {};
  }

  // Some browsers partially implement mediaDevices. We can't just assign an object
  // with getUserMedia as it would overwrite existing properties.
  // Here, we will just add the getUserMedia property if it's missing.
  if (navigator.mediaDevices.getUserMedia === undefined) {
    navigator.mediaDevices.getUserMedia = function(constraints) {
      // First get ahold of the legacy getUserMedia, if present
      var getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

      // Some browsers just don't implement it - return a rejected promise with an error
      // to keep a consistent interface
      if (!getUserMedia) {
        return Promise.reject(new Error('getUserMedia is not implemented in this browser'));
      }

      // Otherwise, wrap the call to the old navigator.getUserMedia with a Promise
      return new Promise(function(resolve, reject) {
        getUserMedia.call(navigator, constraints, resolve, reject);
      });
    }
  }

  navigator.mediaDevices.getUserMedia(mediaConstraints)
    .then(this.successCallback.bind(this), this.errorCallback.bind(this));
}

successCallback(stream: MediaStream) {
  var options = {
    type: 'audio'
  };
  this.stream = stream;
  this.recordRTC = RecordRTC(stream, options);
  this.recordRTC.startRecording();
}

errorCallback(error) {
  console.log(error);
}

stopRecording() { // call this to stop recording
  this.recording = false;
  this.converting = true;
  let recordRTC = this.recordRTC;
  if (!recordRTC) return;
  recordRTC.stopRecording(this.processAudio.bind(this));
  this.stream.getAudioTracks().forEach(track => track.stop());
}

processAudio(audioVideoWebMURL) {
  let recordRTC = this.recordRTC;
  var recordedBlob = recordRTC.getBlob(); // you can save the recorded media data in various formats, refer to the link below
  console.log(recordedBlob)
  this.recordRTC.save('audiorecording.wav');
  let base64Data = '';
  this.recordRTC.getDataURL((dataURL) => {
    base64Data = dataURL.split('base64,')[1];
    console.log(RecordRTC.getFromDisk('audio', function(dataURL, type) {
      type == 'audio'
    }));
    console.log(dataURL);
  })
}
Note that in Google Chrome you cannot record audio/video on a live site unless it is served over HTTPS.
I am using the MediaStream Recording API to record audio in the browser, like this (courtesy of https://github.com/bryanjenningz/record-audio):
const recordAudio = () =>
  new Promise(async resolve => {
    // This wants to be secure. It will throw unless served from https:// or localhost.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const mediaRecorder = new MediaRecorder(stream);
    let audioChunks = [];

    mediaRecorder.addEventListener('dataavailable', event => {
      audioChunks.push(event.data);
      console.log("Got audioChunk!!", event.data.size, event.data.type);
      // mediaRecorder.requestData()
    });

    const start = () => {
      audioChunks = [];
      mediaRecorder.start(1000); // milliseconds per recorded chunk
    };

    const stop = () =>
      new Promise(resolve => {
        mediaRecorder.addEventListener('stop', () => {
          const audioBlob = new Blob(audioChunks, { type: 'audio/mpeg' });
          const audioUrl = URL.createObjectURL(audioBlob);
          const audio = new Audio(audioUrl);
          const play = () => audio.play();
          resolve({ audioChunks, audioBlob, audioUrl, play });
        });
        mediaRecorder.stop();
      });

    resolve({ start, stop });
  });
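For context, a rough sketch of how the helper above might be used; the async wrapper and the 5-second timeout are only an illustration, not part of the original code:

// Hypothetical usage of recordAudio()
(async () => {
  const recorder = await recordAudio();
  recorder.start();                          // begin recording, one chunk per second
  setTimeout(async () => {
    const { play, audioBlob } = await recorder.stop();
    console.log('recorded blob size:', audioBlob.size);
    play();                                  // play back what was just recorded
  }, 5000);
})();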
I would like to modify this code to start streaming to Node.js while it is still recording. I understand the header won't be complete until the recording has finished. I can either account for that on the Node.js side, or perhaps I can live with invalid headers, because I'll be feeding this into ffmpeg on Node.js anyway. How do I do this?
The trick is, when you start your recorder, to start it like this: mediaRecorder.start(timeSlice), where timeSlice is the number of milliseconds the browser waits before emitting a dataavailable event with a blob of data.
Then, in your event handler for dataavailable you call the server:
mediaRecorder.addEventListener('dataavailable', event => {
  myHTTPLibrary.post(event.data);
});
That's the general solution. It's not possible to insert an example here, because a code sandbox can't ask you to use your webcam, but I've created one here. It simply sends your data to Request Bin, where you can watch the data stream in.
There are some other things you'll need to think about if you want to stitch the video or audio back together. The blog post touches on that.
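If you don't have an HTTP helper at hand, here is a rough sketch of the client side with plain fetch; the /upload endpoint is an assumption for illustration, not something from the example above:

// Hypothetical: POST each recorded chunk to your own Node.js endpoint as it arrives
mediaRecorder.addEventListener('dataavailable', event => {
  fetch('/upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/octet-stream' },
    body: event.data // a Blob; fetch sends it as the raw request body
  }).catch(err => console.error('chunk upload failed', err));
});

mediaRecorder.start(1000); // emit a chunk roughly every second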
I'm trying to know when sound is actually coming out of the user's speakers, if that makes any sense.
I have the following:
const stream = new Audio("https://stream1.srvnetplus.com:18122/stream");
let loading = false;

async function play() {
  loading = true;
  // stream.play() returns a Promise<void>
  await stream.play();
  loading = false;
}
loading is set to false, but on some occasions actual sound only comes out of my speakers 1-2 seconds later.
This React library (react-audio-player) has an event called onCanPlay. As the docs state:
onCanPlay (Function): called when enough of the file has been downloaded to be able to start playing. Passed the event.
This makes me think that the await in await stream.play() is not enough to know when actual audio is being played. Correct me if I'm wrong.
I would like a solution that looks something like this:
const stream = new Audio("https://stream1.srvnetplus.com:18122/stream");
let loading = false;

async function play() {
  loading = true;
  const res = await stream.play();
  await res.ready();
  loading = false;
}
Solved my problem with howler.js
import { Howl } from "howler";

const streamUrl = "https://stream1.srvnetplus.com:18122/stream";

const stream = new Howl({
  src: [streamUrl],
  html5: true,
  preload: true,
});

let loading = false;

function play() {
  loading = true;
  stream.play();
  stream.on("play", () => {
    loading = false;
  });
}
All of the <audio> events are logged in the example below. The playing event fires once audio playback actually begins.
function initPlayback() {
  const audio = document.querySelector('audio')
  event.target.remove() // remove <button>
  audio.hidden = false;

  // log all events of <audio>, except timeupdate (too verbose)
  ['audioprocess', 'canplay', 'canplaythrough', 'complete', 'durationchange',
   'emptied', 'ended', 'loadeddata', 'loadedmetadata', 'pause', 'play',
   'playing', 'ratechange', 'seeked', 'seeking', 'stalled', 'suspend',
   // 'timeupdate'
   'volumechange', 'waiting']
    .forEach(name => audio.addEventListener(name, event => {
      console.log(name)
    }))

  audio.src = 'https://stream1.srvnetplus.com:18122/stream'
  audio.play()
    .then(() => console.log('play() resolved'))
}
<audio controls hidden></audio>
<button onclick="initPlayback()" style="padding: 1em 2em;">Play</button>
I am trying to build an Internet radio platform and I have battled a lot with the problem mentioned in the title.
To explain further, what I am trying to achieve is 1) while recording input from the broadcaster's microphone, to mix it with audio from music playback, and 2) at the same time to be able to lower or raise the volume of the music playback (also in real time, through the UI) so that the broadcaster's voice can blend with the music.
This is to imitate typical radio broadcaster behaviour, where the music volume lowers when the person wants to speak and rises back again when they finish talking! The 2nd feature definitely comes after the 1st, but I guess mentioning it helps explain both.
To conclude, I have already managed to write code that receives and reproduces microphone input (though it doesn't work perfectly!). At this point I need to know whether there is code or a library that can help me do exactly what I am trying to do. All this is in the hope that I won't need to use IceCast etc.
Below is my code for getting microphone input:
// getting microphone input and sending it to our server
var recordedChunks = [];
var mediaRecorder = null;
var micInterval = null; // id of the interval that creates the recordings

let slice = 100; // how frequently we capture sound
const slices = 20; // 20 * 100 ms => 2 sec
let sendfreq = slice * slices; // how frequently we send it

/* get microphone button handle */
var microphoneButton = document.getElementById('console-toggle-microphone');
microphoneButton.setAttribute('on', 'no');

/* initialise mic streaming capability */
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(stream => {
  _stream = stream;
})
.catch(function(err) {
  show_error('Error: Microphone access has been denied probably!', err);
});

function toggle_mic() {
  if (microphoneButton.getAttribute('on') == 'yes')
  {
    clearInterval(micInterval); // stop generating new recordings
    microphoneButton.setAttribute('on', 'no');
    microphoneButton.innerHTML = 'start mic';
  }
  else if (microphoneButton.getAttribute('on') == 'no')
  {
    microphoneButton.setAttribute('on', 'yes');
    microphoneButton.innerHTML = 'stop mic';

    function record_and_send() {
      const recorder = new MediaRecorder(_stream);
      const chunks = [];

      recorder.ondataavailable = e => chunks.push(e.data);
      recorder.onstop = e => socket.emit('console-mic-chunks', chunks);

      setTimeout(() => recorder.stop(), sendfreq); // we'll have a 2 s media file
      recorder.start();
    }

    // generate a new file every 2 s
    micInterval = setInterval(record_and_send, sendfreq);
  }
}
Thanks a lot!
If the audio track from the microphone doesn't need to be synchronized with the audio playback (and I don't see any reason why it would), then you can just play two separate audio instances and change the volume of the one playing underneath (the music playback in your case).
In short, you don't have to mix audio tracks or do anything complex to solve this task.
Draft example:
<input type="range" id="myRange" value="20" oninput="changeVol(this.value)" onchange="changeVol(this.value)">
// Audio playback
const audioPlayback = new Audio();
const audioPlaybackSrc = document.createElement("source");
audioPlaybackSrc.type = "audio/mpeg";
audioPlaybackSrc.src = "path/to/audio.mp3";
audioPlayback.appendChild(audioPlaybackSrc);
audioPlayback.play();

// Change volume for audio playback on the fly
function changeVol(newVolumeValue) {
  // the <input> range goes from 0 to 100, while volume expects 0 to 1
  audioPlayback.volume = newVolumeValue / 100;
}

// Dealing with the microphone
navigator.mediaDevices.getUserMedia({
  audio: true
})
.then(stream => {
  // Start recording the audio
  const mediaRecorder = new MediaRecorder(stream);
  mediaRecorder.start();

  // While recording, store the audio data chunks
  const audioChunks = [];
  mediaRecorder.addEventListener("dataavailable", event => {
    audioChunks.push(event.data);
  });

  // Play the audio after stop
  mediaRecorder.addEventListener("stop", () => {
    const audioBlob = new Blob(audioChunks);
    const audioUrl = URL.createObjectURL(audioBlob);
    const audio = new Audio(audioUrl);
    audio.play();
  });

  // Stop recording the audio
  setTimeout(() => {
    mediaRecorder.stop();
  }, 3000);
});
Play multiple audio files simultaneously
Change audio volume with JS
How to record and play audio in JavaScript