Jest testing HTMLMediaElement (JavaScript)

I have written a web-app which plays selected mp3 files in order of selection. When it comes to testing I cannot get my jest tests to enter the onended() handler of the HTMLAudioElement.
I have tried to spyOn the play() method on the elements but cannot find a way to set the audio's ended attribute to true.
The code for playing the audio is as follows:
playAudioFiles = () => {
  const audioFiles = this.getAudioFiles()
  const audio = audioFiles[0]
  let index = 1
  audio.play()
  audio.onended = () => {
    if (index < audioFiles.length) {
      audio.src = audioFiles[index].src
      audio.play()
      index++
    }
  }
}
I am spying on the play method of the audio elements as follows:
mockPlay = jest.spyOn(window.HTMLMediaElement.prototype, 'play')
Potentially there is something I could do here to trigger the ended event?
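One approach (a sketch, not a confirmed fix for this exact setup): since jsdom never actually plays audio, the test has to simulate the track finishing itself, e.g. by dispatching an `ended` event on the element (`audio.dispatchEvent(new Event('ended'))`), which invokes the `onended` handler. The stub below shows the same chaining logic in isolation; the stub object and the file names are made up for illustration:

```javascript
// Hypothetical stand-in for the audio element (assumes the code under test
// only touches .src, .play() and .onended, as in the question).
const playedSrcs = [];

const audio = {
  src: '',
  onended: null,
  play() { playedSrcs.push(this.src); },
};

const audioFiles = [{ src: 'note_0.mp3' }, { src: 'note_1.mp3' }];

// the playAudioFiles logic from the question, run against the stub
audio.src = audioFiles[0].src;
let index = 1;
audio.play();
audio.onended = () => {
  if (index < audioFiles.length) {
    audio.src = audioFiles[index].src;
    audio.play();
    index++;
  }
};

// in the test, simulate the first track ending:
audio.onended();
// playedSrcs is now ['note_0.mp3', 'note_1.mp3']
```

The same trigger works against the real jsdom element inside a Jest test once `play()` is spied on, because `onended` is just the `ended` event's handler property.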

Related

Realtime microphone input mixing with music playback

I am trying to build an Internet Radio platform and I have battled a lot with the problem mentioned in the title.
To explain further, what I am trying to achieve is: 1) while recording input from the broadcaster's microphone, to mix it with audio from music playback, and 2) at the same time be able to lower or raise the volume of the music playback (also in real time, through the UI) so that the broadcaster's voice can blend with the music.
This imitates a typical radio broadcaster's behavior, where the music volume lowers when the person wants to speak and rises back up when they finish talking! The 2nd feature definitely comes after the 1st, but I guess mentioning it helps explain both.
To conclude, I have already managed to write code that receives and reproduces microphone input (though it doesn't work perfectly!). At this point I need to know if there is code or a library that can help me do exactly what I am trying to do. All this is in the hope that I won't need to use IceCast etc.
Below is my code for getting microphone input:
// getting microphone input and sending it to our server
var recordedChunks = [];
var mediaRecorder = null;
var _stream = null;
var micInterval = null;
let slice = 100; // how frequently we capture sound (ms)
const slices = 20; // 20 * 100ms => after 2 sec
let sendfreq = slice * slices; // how frequently we send it

/* get microphone button handle */
var microphoneButton = document.getElementById('console-toggle-microphone');
microphoneButton.setAttribute('on', 'no');

/* initialise mic streaming capability */
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(stream => {
    _stream = stream;
  })
  .catch(function(err) {
    show_error('Error: Microphone access has probably been denied!', err);
  });

function toggle_mic() {
  if (microphoneButton.getAttribute('on') == 'yes') {
    clearInterval(micInterval);
    microphoneButton.setAttribute('on', 'no');
    microphoneButton.innerHTML = 'start mic';
  }
  else if (microphoneButton.getAttribute('on') == 'no') {
    microphoneButton.setAttribute('on', 'yes');
    microphoneButton.innerHTML = 'stop mic';

    function record_and_send() {
      const recorder = new MediaRecorder(_stream);
      const chunks = [];
      recorder.ondataavailable = e => chunks.push(e.data);
      recorder.onstop = e => socket.emit('console-mic-chunks', chunks);
      setTimeout(() => recorder.stop(), sendfreq); // we'll have a 2s media file
      recorder.start();
    }

    // generate a new file every 2s
    micInterval = setInterval(record_and_send, sendfreq);
  }
}
Thanks a lot!
If the audio track from the microphone doesn't need to be synchronized with the audio playback (and I see no reason why it would), you can just play two separate audio instances and change the volume of the one underneath (the music playback, in your case).
In short, you don't have to mix audio tracks or do anything complex to solve this task.
Draft example:
<input type="range" id="myRange" min="0" max="1" step="0.01" value="0.2" oninput="changeVol(this.value)" onchange="changeVol(this.value)">
// Audio playback
const audioPlayback = new Audio();
const audioPlaybackSrc = document.createElement("source");
audioPlaybackSrc.type = "audio/mpeg";
audioPlaybackSrc.src = "path/to/audio.mp3";
audioPlayback.appendChild(audioPlaybackSrc);
audioPlayback.play();

// Change volume for audio playback on the fly
// (HTMLMediaElement.volume expects a value between 0 and 1,
// hence the min/max/step on the range input above)
function changeVol(newVolumeValue) {
  audioPlayback.volume = parseFloat(newVolumeValue);
}
// Dealing with the microphone
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    // Start recording the audio
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start();

    // While recording, store the audio data chunks
    const audioChunks = [];
    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });

    // Play the audio after stop
    mediaRecorder.addEventListener("stop", () => {
      const audioBlob = new Blob(audioChunks);
      const audioUrl = URL.createObjectURL(audioBlob);
      const audio = new Audio(audioUrl);
      audio.play();
    });

    // Stop recording the audio
    setTimeout(() => {
      mediaRecorder.stop();
    }, 3000);
  });
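For the second feature from the question (lowering the music while the broadcaster speaks), the "ducking" itself is just a computation before assigning to the playback element's volume. A hypothetical helper, as a sketch (the function and parameter names are mine, not from the question; how you measure the mic level is a separate problem):

```javascript
// Hypothetical "ducking" helper: returns the playback volume to use,
// lowering the music whenever the microphone level crosses a threshold.
// All values are in the 0..1 range that HTMLMediaElement.volume expects.
function duckedVolume(baseVolume, micLevel, threshold = 0.1, duckTo = 0.2) {
  // clamp the base volume into the valid range first
  const base = Math.min(Math.max(baseVolume, 0), 1);
  return micLevel > threshold ? Math.min(base, duckTo) : base;
}

// e.g., on each mic-level update (browser):
// audioPlayback.volume = duckedVolume(0.8, currentMicLevel);
```

Called periodically with the current mic level, this restores the original volume automatically once the broadcaster stops talking.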
Play multiple audio files simultaneously
Change audio volume with JS
How to record and play audio in JavaScript

JavaScript: Use MediaRecorder to record streams from <video> but failed

I'm trying to record parts of the video from a <video> tag and save them for later use. I found this article: Recording a media element, which describes a method of first calling stream = video.captureStream(), then using new MediaRecorder(stream) to get a recorder.
I've tested some demos, and MediaRecorder works fine if the stream comes from the user's device (such as a microphone). However, when it comes to a media element, my Firefox browser throws an exception: MediaRecorder.start: The MediaStream's isolation properties disallow access from MediaRecorder.
So any idea on how to deal with it?
Browser: Firefox
The page (including the js file) is stored locally.
The src attribute of the <video> tag can be either a file from local storage or a URL from the Internet.
Code snippets:
let chunks = [];

let getCaptureStream = function () {
  let stream;
  const fps = 0;
  if (video.captureStream) {
    console.log("use captureStream");
    stream = video.captureStream(fps);
  } else if (video.mozCaptureStream) {
    console.log("use mozCaptureStream");
    stream = video.mozCaptureStream(fps);
  } else {
    console.error('Stream capture is not supported');
    stream = null;
  }
  return stream;
}

video.addEventListener('play', () => {
  let stream = getCaptureStream();
  const mediaRecorder = new MediaRecorder(stream);

  mediaRecorder.onstop = function() {
    const newVideo = document.createElement('video');
    newVideo.controls = true;
    // the MIME type belongs on the Blob, not on createObjectURL
    const blob = new Blob(chunks, { type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' });
    chunks = [];
    newVideo.src = window.URL.createObjectURL(blob);
    document.body.appendChild(newVideo);
  }

  mediaRecorder.ondataavailable = function(e) {
    chunks.push(e.data);
  }

  stopButton.onclick = function() {
    mediaRecorder.stop();
  }

  mediaRecorder.start(); // This is the line that triggers the exception.
});
I found the solution myself.
When I switched to Chrome, it showed a CORS issue that prevented me from even playing the original video. So I guess it's the same security policy that prevents MediaRecorder from accessing the MediaStream. Therefore, I deployed the local files to a local server, following the instructions on this page.
After that, MediaRecorder started working. Hope this helps someone in need.
Still, the official documentation doesn't seem to say much about the isolation properties of media elements, so any further explanation is welcome.

setSinkId only works with <audio> and not new Audio()

Trying to call setSinkId on an audio node, I have noticed it only works in very specific circumstances, and I need clarification. The examples behave similarly in the latest Firefox and Chrome.
This works:
index.html
<audio id="audio"></audio>
app.js
this.audio = document.getElementById('audio');
this.audio.src =... and .play()
this.audio.setSinkId(this.deviceId);
This is not OK beyond testing as now every player will be sharing a node. They each need a unique one.
This does not:
app.js
this.audio = new Audio();
this.audio.src =... and .play()
this.audio.setSinkId(this.deviceId)
This also doesn't work
app.js
this.audio = document.createElement('audio');
document.body.appendChild(this.audio);
this.audio.src =... and .play()
this.audio.setSinkId(this.deviceId)
Are there differences between new Audio, createElement, and audio present in HTML?
Why doesn't setSinkId work on a new Audio()?
For me it works in both Chrome and Firefox (after I changed the prefs), as long as I request the permissions beforehand and initialize the audio through a user gesture.
Requesting access to the audio output will eventually become as easy as calling navigator.mediaDevices.selectAudioOutput() and letting the user choose what they want, but until that is implemented, we still have to hack around it by requesting a media input instead, as explained in this answer.
So this script should do it:
btn.onclick = async (evt) => {
  // request device access the bad way,
  // until we get a proper mediaDevices.selectAudioOutput
  (await navigator.mediaDevices.getUserMedia({ audio: true }))
    .getTracks().forEach((track) => track.stop());
  const devices = await navigator.mediaDevices.enumerateDevices();
  const audio_outputs = devices.filter((device) => device.kind === "audiooutput");
  const sel = document.createElement("select");
  audio_outputs.forEach((device, i) => sel.append(new Option(device.label || `device ${i}`, device.deviceId)));
  document.body.append(sel);
  btn.textContent = "play audio in selected output";
  btn.onclick = (evt) => {
    const aud = new Audio(AUDIO_SRC);
    aud.setSinkId(sel.value);
    aud.play();
  };
}
But since StackSnippets don't allow gUM calls, we have to outsource it, so here is a live fiddle.
It looks like when you create a new Audio it takes some time to load the media and become ready for setSinkId. My solution (I don't like it much, but it's what I have right now) is to use a setTimeout of one second or less. Something like this:
Replace
this.audio.setSinkId(this.deviceId)
By
setTimeout(() => {
  this.audio.setSinkId(this.deviceId)
}, 1000);
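A less timing-dependent alternative is to wait for the element to signal readiness instead of guessing with a fixed delay. A sketch (`whenReady` is a name I made up; it works with anything that implements addEventListener):

```javascript
// Hypothetical helper: resolves once the media element fires the given
// event, instead of hoping one second of setTimeout is long enough.
function whenReady(mediaEl, eventName = 'canplay') {
  return new Promise((resolve) => {
    mediaEl.addEventListener(eventName, resolve, { once: true });
  });
}

// usage in the answer's context (browser only):
// await whenReady(this.audio);
// await this.audio.setSinkId(this.deviceId);
```

Because the listener uses { once: true }, it cleans itself up after the first firing, so the helper can be reused across elements.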

Assigning different streams to `srcObject` of a video element

I have a video element. I have multiple streams captured by navigator.getUserMedia.
I can assign srcObject successfully the first time:
previewVideoElement.srcObject = stream;
However if I re-assign a different stream to srcObject later (same element) then the stream doesn't work (no errors, blank video). How can I do this without recreating video elements each time?
Edit: trying this fails as well:
const previewVideoElement = document.getElementById("new-device-preview");
previewVideoElement.pause();
previewVideoElement.srcObject = stream;
previewVideoElement.play();
Edit: calling this works a few times, but then fails with The play() request was interrupted by a call to pause(). Without pause I get The play() request was interrupted by a new load request..
previewVideoElement.pause();
previewVideoElement.srcObject = stream;
previewVideoElement.load();
previewVideoElement.play();
Edit: even this heap of garbage doesn't work:
const previewVideoElement = document.getElementById("new-device-preview");
//previewVideoElement.pause();
previewVideoElement.srcObject = stream;
previewVideoElement.load();

const isPlaying = previewVideoElement.currentTime > 0
  && !previewVideoElement.paused
  && !previewVideoElement.ended
  && previewVideoElement.readyState > 2;

if (!isPlaying)
  setTimeout(function () {
    previewVideoElement.play();
  }, 500);
The only thing I could get working reliably:
var previewVideoElement = document.getElementById("new-device-preview");
if (previewVideoElement.srcObject) {
  $("#new-device-preview-container").empty();
  $("#new-device-preview-container").html('<video autoplay class="new-device-preview" id="new-device-preview"></video>');
}
previewVideoElement = document.getElementById("new-device-preview");
previewVideoElement.srcObject = stream;
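A pattern that often avoids both "interrupted" errors without recreating the element is to await the promise returned by play() and swallow only the AbortError raised when a newer assignment interrupts a pending call. A sketch (`setPreviewStream` is a hypothetical name, not from the question):

```javascript
// Assign a new stream and start playback, tolerating the AbortError that
// occurs when a rapid re-assignment interrupts a pending play() call.
async function setPreviewStream(videoEl, stream) {
  videoEl.srcObject = stream;
  try {
    await videoEl.play();
  } catch (err) {
    // only ignore interruptions; let genuine failures surface
    if (err.name !== 'AbortError') throw err;
  }
}
```

Serializing all stream switches through one such function also prevents two competing play() calls from racing each other.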

Optimize audio for iOS Web App

I'm currently developing and testing a game for iOS using JavaScript with the Cordova framework. I'm attempting to add sound effects when certain nodes are touched. Since nodes can be touched repeatedly at any rate, I'm using...
var snd = new Audio("audio/note_"+currentChain.length+".mp3");
snd.play();
Which works for what I need, but when I enable these effects the game lags. I'm working with mp3 files that have been shrunk down to about 16 kB each, and even so the lag is substantial.
What is the best way to optimize sound in my situation? Am I limited in quality because the application is not native?
Thanks!
The best option would be to preload them and have them ready when needed. I just wrote up a quick self-contained closure that I think shows most of what you'd like to know how to do.
var playSound = (function () {
  var sounds = 15, // idk how many mp3 files you have
      loaded = new Array(sounds),
      current = -1,
      chain = [],
      audio,
      i;

  function incrementLoaded(index) {
    loaded[index] = true;
  }

  // preloads the sound data
  for (i = 0; i < sounds; i++) {
    audio = new Audio();
    audio.addEventListener('canplay', incrementLoaded.bind(audio, i));
    audio.src = 'audio/note_' + i + '.mp3';
    chain.push(audio);
  }

  // this will play audio only when ready, in sequential order automatically,
  // or at the "which" index, if supplied
  return function (which) {
    if (typeof which === 'number') {
      current = which;
    } else {
      current = (current + 1) % sounds;
    }
    // only play if loaded
    if (loaded[current]) {
      chain[current].pause();
      chain[current].currentTime = 0;
      chain[current].play();
    }
  };
}());

// would play sounds in order every second
setInterval(function () {
  playSound();
}, 1000);
If you are using multiple files, I'd suggest changing to a single file, using the idea of sound sprites. This link has more details about it: http://www.ibm.com/developerworks/library/wa-ioshtml5/
From my own experience, try increasing the file bitrate if the sound isn't playing exactly where you want it to, ref: http://pupunzi.open-lab.com/2013/03/13/making-html5-audio-actually-work-on-mobile/
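A sketch of the sound-sprite idea mentioned above: one file holds every effect, and each "cue" is a start offset plus a duration. The cue table, element names, and the injectable `schedule` parameter are all made up for illustration; real offsets depend on how the sprite file is assembled:

```javascript
// Sound-sprite sketch: a single audio file holds every effect, so only one
// element needs decoding; each cue is a start offset plus a duration.
const NOTE_SPRITE = {
  note_0: { start: 0.0, duration: 0.4 },
  note_1: { start: 0.5, duration: 0.4 },
};

function playCue(audioEl, sprite, name, schedule = setTimeout) {
  const cue = sprite[name];
  if (!cue) return false;
  audioEl.currentTime = cue.start;        // jump to the cue's offset
  audioEl.play();
  schedule(() => audioEl.pause(), cue.duration * 1000); // stop at cue end
  return true;
}

// browser usage: playCue(new Audio('audio/notes.mp3'), NOTE_SPRITE, 'note_1');
```

Stopping via a timer is approximate; for sample-accurate cue ends, the Web Audio API is the usual upgrade path.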
