I'm trying to detect when sound is actually coming out of the user's speakers, if that makes any sense.
I have the following:
const stream = new Audio("https://stream1.srvnetplus.com:18122/stream");
let loading = false; // must be `let`, not `const`, since it gets reassigned

async function play() {
  loading = true;
  // stream.play() returns a Promise<void>
  await stream.play();
  loading = false;
}
loading is set to false, but on some occasions the actual sound only comes out of my speakers 1–2 seconds later.
This React library (react-audio-player) has an event called onCanPlay. As the docs state:
Event      Type      Description
onCanPlay  Function  Called when enough of the file has been downloaded to be able to start playing. Passed the event.
This makes me think that the await in await stream.play(); is not enough to know when actual audio is being played. Correct me if I'm wrong.
I would like a solution that looks something like this:
const stream = new Audio("https://stream1.srvnetplus.com:18122/stream");
let loading = false;

async function play() {
  loading = true;
  const res = await stream.play();
  await res.ready(); // hypothetical API: resolves once audio is actually audible
  loading = false;
}
Solved my problem with howler.js
import { Howl } from "howler";

const streamUrl = "https://stream1.srvnetplus.com:18122/stream";
const stream = new Howl({
  src: [streamUrl],
  html5: true,
  preload: true,
});

let loading = false;

function play() {
  loading = true;
  stream.play();
  stream.on("play", () => {
    loading = false;
  });
}
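For reference, the same signal seems to be available without a library: the native playing event fires when playback has actually started, whereas canplay only means enough data has been downloaded. A minimal sketch of that approach, reusing the same stream URL and loading flag:

const stream = new Audio("https://stream1.srvnetplus.com:18122/stream");
let loading = false;

function play() {
  loading = true;
  // "playing" fires once audio is actually being rendered,
  // not merely buffered enough to start (that's "canplay")
  stream.addEventListener("playing", () => {
    loading = false;
  }, { once: true });
  stream.play();
}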
All of the <audio> events can be logged, as shown in the example below. The playing event fires once audio playback has actually begun.
function initPlayback(event) {
  const audio = document.querySelector('audio')
  event.target.remove() // remove the <button>
  audio.hidden = false;
  // log all events of <audio>, except timeupdate (too verbose)
  ['audioprocess', 'canplay', 'canplaythrough', 'complete', 'durationchange',
   'emptied', 'ended', 'loadeddata', 'loadedmetadata', 'pause', 'play',
   'playing', 'ratechange', 'seeked', 'seeking', 'stalled', 'suspend',
   // 'timeupdate'
   'volumechange', 'waiting']
    .forEach(name => audio.addEventListener(name, event => {
      console.log(name)
    }))
  audio.src = 'https://stream1.srvnetplus.com:18122/stream'
  audio.play()
    .then(() => console.log('play() resolved'))
}
<audio controls hidden></audio>
<button onclick="initPlayback(event)" style="padding: 1em 2em;">Play</button>
Related
The following JavaScript code records sound and generates a blob with audio every 0.5 seconds.
After recording has stopped, the program plays the first blob, data[0].
I need the audio player to fire an event after data[0] has played, so that the event handler can deliver the next portion to the audio player: data[1] (then data[2], data[3], etc.).
How can I modify the code, and which objects should I use, to do this?
I know that I could pass the whole data[] array to the audio player, but I need a mechanism that lets the audio player request the next portion using events.
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function onSuccess(stream) {
    const recorder = new MediaRecorder(stream);
    const data = [];

    recorder.ondataavailable = (e) => {
      data.push(e.data);
    };

    recorder.start(500); // will fire the 'dataavailable' event every 0.5 seconds

    recorder.onstop = (e) => {
      const audio = document.createElement('audio');
      audio.src = window.URL.createObjectURL(new Blob( data[0] ));
    }

    setTimeout(() => {
      rec.stop();
    }, 5000);
  })
  .catch(function onError(error) {
    console.log(error.message);
  });
I guess that's what you're looking for?
navigator.mediaDevices
  .getUserMedia({ audio: true })
  .then(function onSuccess(stream) {
    // create an audio element and play the live stream
    const audio = document.createElement('audio');
    audio.srcObject = stream; // pass the audio stream
    audio.controls = true;
    audio.play();
    document.body.appendChild(audio);

    const recorder = new MediaRecorder(stream);
    const data = [];

    // ondataavailable fires when you call stop() or requestData(),
    // or after each timeslice you pass to the start function
    recorder.ondataavailable = e => data.push(e.data);

    // Start recording; will generate a blob every 500ms
    recorder.start(500);
  })
  .catch(function onError(error) {
    console.log(error.message);
  });
You had some mistakes to correct:
- When you call start() with a timeslice parameter, ondataavailable fires after each timeslice (and when you call stop() or requestData()); each event delivers one blob, and you still need to stop the recorder to finish the recording.
- You made a mistake with the recorder variable's name (rec instead of recorder) and with the delay in the setTimeout call.
- You recreate an audio element every time the recorder stops and never append it to the DOM.
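As for the event mechanism asked about: the audio element's ended event fires when the current source finishes playing, so the handler can hand the next portion to the player. A rough sketch, with the caveat that chunks after data[0] may not be independently playable, since only the first chunk carries the container headers; concatenating data.slice(0, i + 1) into one blob, or using MediaSource, may be needed in practice:

function playSequentially(data) {
  const audio = document.createElement('audio');
  document.body.appendChild(audio);
  let i = 0;

  const playNext = () => {
    if (i >= data.length) return; // no more portions to play
    // note: Blob takes an array of parts, i.e. new Blob([data[i]])
    audio.src = URL.createObjectURL(new Blob([data[i]]));
    i += 1;
    audio.play();
  };

  // 'ended' fires when the current portion finishes playing,
  // and the handler requests the next one
  audio.addEventListener('ended', playNext);
  playNext();
}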
I am using the MediaStream Recording API to record audio in the browser, like this (courtesy of https://github.com/bryanjenningz/record-audio):
const recordAudio = () =>
  new Promise(async resolve => {
    // This wants to be secure. It will throw unless served from https:// or localhost.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const mediaRecorder = new MediaRecorder(stream);
    let audioChunks = [];

    mediaRecorder.addEventListener('dataavailable', event => {
      audioChunks.push(event.data);
      console.log("Got audioChunk!!", event.data.size, event.data.type);
      // mediaRecorder.requestData()
    });

    const start = () => {
      audioChunks = [];
      mediaRecorder.start(1000); // milliseconds per recorded chunk
    };

    const stop = () =>
      new Promise(resolve => {
        mediaRecorder.addEventListener('stop', () => {
          const audioBlob = new Blob(audioChunks, { type: 'audio/mpeg' });
          const audioUrl = URL.createObjectURL(audioBlob);
          const audio = new Audio(audioUrl);
          const play = () => audio.play();
          resolve({ audioChunks, audioBlob, audioUrl, play });
        });
        mediaRecorder.stop();
      });

    resolve({ start, stop });
  });
I would like to modify this code to start streaming to Node.js while it's still recording. I understand the header won't be complete until the recording has finished. I can either account for that in Node.js, or perhaps I can live with invalid headers, because I'll be feeding this into ffmpeg in Node.js anyway. How do I do this?
The trick is, when you start your recorder, to start it like this: mediaRecorder.start(timeSlice), where timeSlice is the number of milliseconds the browser waits before emitting a dataavailable event with a blob of data.
Then, in your event handler for dataavailable you call the server:
mediaRecorder.addEventListener('dataavailable', event => {
  myHTTPLibrary.post(event.data);
});
That's the general solution. It's not possible to embed an example here, because a code sandbox can't request access to your webcam, but I've created one here. It simply sends your data to Request Bin, where you can watch the data stream in.
There are some other things you'll need to think about if you want to stitch the video or audio back together. The blog post touches on that.
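If you don't have an HTTP library handy, plain fetch works the same way; a minimal sketch, assuming a hypothetical /upload endpoint on your Node.js server:

mediaRecorder.addEventListener('dataavailable', event => {
  // POST each chunk as it arrives; '/upload' is a placeholder endpoint
  fetch('/upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/octet-stream' },
    body: event.data, // a Blob; fetch sends it as the request body
  }).catch(err => console.error('chunk upload failed', err));
});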
I am trying to build an Internet radio platform and I have battled a lot with the problem mentioned in the title.
To explain further, what I am trying to achieve is: 1) while recording input from the broadcaster's microphone, mix it with audio from music playback, and 2) at the same time be able to lower or raise the volume of the music playback (also in real time, through the UI) so that the broadcaster's voice can blend with the music.
This is to imitate typical radio-broadcaster behavior, where the music volume lowers when the person wants to speak and rises again when they finish talking! The 2nd feature definitely comes after the 1st, but I guess mentioning it helps explain both.
To conclude, I have already managed to write code that receives and reproduces microphone input (though it doesn't work perfectly!). At this point I need to know whether there is code or a library that can help me do exactly what I am trying to do. All this is in the hope that I won't need to use IceCast etc.
Below is my code for getting microphone input:
// getting microphone input and sending it to our server
var recordedChunks = [];
var mediaRecorder = null;
var _stream = null;        // set once microphone access is granted
var sendIntervalId = null; // kept so we can clear the interval later

let slice = 100;   // how frequently we capture sound
const slices = 20; // 20 * 100ms => after 2 sec
let sendfreq = slice * slices; // how frequently we send it

/* get microphone button handle */
var microphoneButton = document.getElementById('console-toggle-microphone');
microphoneButton.setAttribute('on', 'no');

/* initialise mic streaming capability */
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(stream => {
    _stream = stream;
})
.catch(function(err) {
    show_error('Error: Microphone access has probably been denied!', err);
});

function toggle_mic() {
    if (microphoneButton.getAttribute('on') == 'yes')
    {
        clearInterval(sendIntervalId); // clearInterval() needs the interval's id
        microphoneButton.setAttribute('on', 'no');
        microphoneButton.innerHTML = 'start mic';
    }
    else if (microphoneButton.getAttribute('on') == 'no')
    {
        microphoneButton.setAttribute('on', 'yes');
        microphoneButton.innerHTML = 'stop mic';

        function record_and_send() {
            const recorder = new MediaRecorder(_stream);
            const chunks = [];
            recorder.ondataavailable = e => chunks.push(e.data);
            recorder.onstop = e => socket.emit('console-mic-chunks', chunks);
            setTimeout(() => recorder.stop(), sendfreq); // we'll have a 2s media file
            recorder.start();
        }

        // generate a new file every 2s
        sendIntervalId = setInterval(record_and_send, sendfreq);
    }
}
Thanks a lot!
If the audio track from the microphone doesn't need to be synchronized with the music playback (and I don't see any reason why it would), you can just play two separate audio instances and change the volume of the one that needs it (the music playback in your case).
In short, you don't have to mix audio tracks or do anything complex to solve this task.
Draft example:
<input type="range" id="myRange" value="20" oninput="changeVol(this.value)" onchange="changeVol(this.value)">
// Audio playback
const audioPlayback = new Audio();
const audioPlaybackSrc = document.createElement("source");
audioPlaybackSrc.type = "audio/mpeg";
audioPlaybackSrc.src = "path/to/audio.mp3";
audioPlayback.appendChild(audioPlaybackSrc);
audioPlayback.play();
// Change volume for audio playback on the fly
function changeVol(newVolumeValue) {
audioPlayback.volume = newVolumeValue;
}
// Dealing with the microphone
navigator.mediaDevices.getUserMedia({
audio: true
})
.then(stream => {
// Start recording the audio
const mediaRecorder = new MediaRecorder(stream);
mediaRecorder.start();
// While recording, store the audio data chunks
const audioChunks = [];
mediaRecorder.addEventListener("dataavailable", event => {
audioChunks.push(event.data);
});
// Play the audio after stop
mediaRecorder.addEventListener("stop", () => {
const audioBlob = new Blob(audioChunks);
const audioUrl = URL.createObjectURL(audioBlob);
const audio = new Audio(audioUrl);
audio.play();
});
// Stop recording the audio
setTimeout(() => {
mediaRecorder.stop();
}, 3000);
});
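If the two sources do eventually need to be mixed into one stream (e.g. to send a single combined feed to listeners), the Web Audio API can do it with gain nodes; a rough sketch, where the "music" element id and the micStream variable are assumptions for illustration:

const ctx = new AudioContext();

// sources: the music element and the microphone stream
const music = ctx.createMediaElementSource(document.getElementById("music"));
const mic = ctx.createMediaStreamSource(micStream); // micStream from getUserMedia

// one gain node per source, so each volume can be changed independently
const musicGain = ctx.createGain();
const micGain = ctx.createGain();

// mix both into a single MediaStream
const mixed = ctx.createMediaStreamDestination();
music.connect(musicGain).connect(mixed);
mic.connect(micGain).connect(mixed);

// createMediaElementSource reroutes the element's audio, so also
// connect musicGain to ctx.destination if the broadcaster should hear it
musicGain.connect(ctx.destination);

// duck the music while the broadcaster speaks, raise it afterwards
musicGain.gain.value = 0.2;

// mixed.stream is a regular MediaStream, usable with MediaRecorder
const recorder = new MediaRecorder(mixed.stream);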
See also:
- Play multiple audio files simultaneously
- Change audio volume with JS
- How to record and play audio in JavaScript
I've created a minimal WebRTC test site that is able to request the user's webcam/audio stream, record it, and play back the recording after it has been stopped.
Demo: https://output.jsbin.com/tabosipefo/
Edit1: https://jsbin.com/tabosipefo/edit?html,console,output
Since this all happens within one promise, navigator.mediaDevices.getUserMedia(), I was wondering if it is actually possible to detect an ongoing stream and (a) record it, and (b) stop and save it.
¹ WebRTC does not work in jsbin when in edit view, for some reason...
If you use no framework and want to use vanilla JS, your best bet is to tack the stream object onto the global window object.
Preview stream
const showWebcamStream = () => {
  navigator.mediaDevices
    .getUserMedia({ audio: true, video: true })
    .then(stream => {
      window.localStream = stream; // ⭠ tack it onto the window object
      // grab the <video> element
      const video = document.querySelector("#video-preview");
      video.srcObject = stream;
      // display the stream
      video.onloadedmetadata = () => video.play();
    })
    .catch(err => console.log(err.name, err.message));
};
Now the video will be displayed within the video element (id: #video-preview).
Stop Stream(s)
const hideWebcamStream = () => localStream.getTracks().forEach(track => track.stop());
You should put the mediaRecorder in the window object in order to stop it later.
Record Stream
const startWebcamRecorder = () => {
  // check if localStream is in window and if it is active
  if ("localStream" in window && localStream.active) {
    // save the mediaRecorder to window as well, to stop it independently later
    window.mediaRecorder = new MediaRecorder(localStream);
    window.dataChunks = [];
    mediaRecorder.start();
    console.log(mediaRecorder.state);
    mediaRecorder.ondataavailable = e => dataChunks.push(e.data);
  }
};
Stop Recording and Preview the recording
You need another video element to play back your recording: #video-playback.
const stopWebcamRecorder = () => {
  if ("mediaRecorder" in window && mediaRecorder.state === "recording") {
    mediaRecorder.stop();
    console.log(mediaRecorder.state);

    mediaRecorder.onstop = () => {
      let blob = new Blob(dataChunks, { type: "video/mp4" });
      dataChunks = [];
      let videoURL = window.URL.createObjectURL(blob);
      const videoPlayback = document.getElementById("video-playback");
      videoPlayback.src = videoURL;
    };
  }
};
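To try this out, the four functions could be wired to buttons; a hypothetical wiring sketch (it assumes the #video-preview and #video-playback elements from above already exist in the page):

// hypothetical wiring: create buttons that call the functions above
[["Preview", showWebcamStream],
 ["Record", startWebcamRecorder],
 ["Stop recording", stopWebcamRecorder],
 ["Stop stream", hideWebcamStream]]
  .forEach(([label, handler]) => {
    const btn = document.createElement("button");
    btn.textContent = label;
    btn.onclick = handler;
    document.body.appendChild(btn);
  });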
I have one video of duration 9200 ms, and a canvas displaying the user's webcam video. I'm aiming to record the webcam video while the original video plays, to create an output blob of exactly the same duration with MediaRecorder, but I always seem to get a video with a longer length (typically around 9400 ms).
I've found that if I take the difference in durations and skip ahead in the output video by that amount, it basically syncs up with the original video, but I'm hoping not to have to use this hack. Knowing this, I assumed the difference was because HTML5 video's play() function is asynchronous, but even calling recorder.start() inside a .then() after the play() promise still results in an output blob with a longer duration.
I start() the MediaRecorder after play()ing the original video, and call stop() inside a requestAnimationFrame loop when I see that the original video has ended. Changing MediaRecorder.start() to begin in the requestAnimationFrame loop, only after checking that the original video is playing, also results in a longer output blob.
What might be the reason for the longer output? From the documentation it doesn't appear that MediaRecorder's start or stop functions are asynchronous, so is there some way to guarantee an exact starting time with HTML5 video and MediaRecorder?
Yes, start() and stop() are async; that's why we have onstart and onstop events firing:
const stream = makeEmptyStream();
const rec = new MediaRecorder(stream);

rec.onstart = (evt) => { console.log( "took %sms to start", performance.now() - begin ); };
const begin = performance.now();
rec.start();

setTimeout( () => {
  rec.onstop = (evt) => { console.log( "took %sms to stop", performance.now() - begin ); };
  const begin = performance.now();
  rec.stop();
}, 1000 );

function makeEmptyStream() {
  const canvas = document.createElement('canvas');
  canvas.getContext('2d').fillRect(0,0,1,1);
  return canvas.captureStream();
}
You can thus try to pause your video once it's ready to play, then wait until your recorder has started before resuming playback of the video.
However, given that everything in both HTMLMediaElement and MediaRecorder is async, there is no way to get a perfect one-to-one relation...
const vid = document.querySelector('video');

onclick = (evt) => {
  onclick = null;
  vid.play().then( () => {
    // pause the main video immediately
    vid.pause();
    // we may have advanced a few µs already, so go back to the beginning
    vid.currentTime = 0;
    // only when we're back at the beginning
    vid.onseeked = (evt) => {
      console.log( 'recording will begin shortly, please wait until the end of the video' );
      console.log( 'original %ss', vid.duration );
      const stream = vid.captureStream ? vid.captureStream() : vid.mozCaptureStream();
      const chunks = [];
      const rec = new MediaRecorder( stream );
      rec.ondataavailable = (evt) => {
        chunks.push( evt.data );
      };
      rec.onstop = (evt) => {
        logVideoDuration( new Blob( chunks ), "recorded %ss" );
      };
      vid.onended = (evt) => {
        rec.stop();
      };
      // wait until the recorder is ready before playing the video again
      rec.onstart = (evt) => {
        vid.play();
      };
      rec.start();
    };
  } );

  function logVideoDuration( blob, name ) {
    const el = document.createElement('video');
    el.src = URL.createObjectURL( blob );
    el.play().then( () => {
      el.pause();
      // seeking far past the end forces the browser to compute
      // the real duration of a file recorded without duration metadata
      el.onseeked = (evt) => console.log( name, el.duration );
      el.currentTime = 10e25;
    } );
  }
};
video { pointer-events: none; width: 100% }
click to start<br>
<video src="https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm" controls crossorigin></video>
Also note that there may be some discrepancy between the duration declared by your media, the calculated duration of the recorded media, and their actual durations. Indeed, these durations are often just a value hard-coded in the metadata of the files, but given how the MediaRecorder API works it's hard to set that value there, so, for instance, Chrome will produce files without a duration, and players will try to approximate it based on the last point they can seek to in the media.