I have several long stories for which the source audio is a set of sentence-by-sentence audio files. I would like to create a web page where one could listen to individual sentences of a particular story many times, or listen to the whole story from start to finish.
To start, my web page has many <audio> elements, each of which is in a <p> with the corresponding text. Each of these <audio> elements corresponds to a single sentence of the story.
I was about to start coding up a JavaScript object to provide some sort of "play all" functionality: you'd click the button and it would play audio[0]; when that finished, audio[1], and so on. It would also have a 'pause' button and keep track of where you were, etc. This way the individual sentences could still be played on their own, since they each have an audio element, or one could use my "play all" buttons at the top to treat the sequence of audio elements as if they were one big element.
Then I started asking myself if there's some better/easier/more canonical solution here. One thing I thought of doing would be to cat all of these individual audio files together into a big audio file and perhaps create 'chapter' <track> elements. Is this preferable?
What are the pros and cons to each approach? Is there some out-of-the-box solution already made for me which I simply haven't turned up?
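For reference, the "play all" object described above can be quite small. Below is a hedged sketch, not a definitive implementation: it takes an array of audio-like elements (in the page this would be the existing `<audio>` elements in document order), chains them via the 'ended' event, and remembers its position so pause/resume works. The function name and button wiring are my own assumptions.

```javascript
// A minimal "play all" controller over a list of audio-like elements.
// Each element needs play(), pause(), and addEventListener('ended', ...).
function createPlayAll(elements) {
  let pos = 0;          // index of the current sentence
  let playing = false;  // true while in "play all" mode

  elements.forEach((el, i) => {
    el.addEventListener('ended', () => {
      // auto-advance only while in "play all" mode and only from the
      // element that is actually current (manual plays don't advance)
      if (playing && i === pos && pos + 1 < elements.length) {
        pos += 1;
        elements[pos].play();
      }
    });
  });

  return {
    play() { playing = true; elements[pos].play(); },
    pause() { playing = false; elements[pos].pause(); },
    restart() { pos = 0; },
    position() { return pos; }
  };
}

// In the page (hypothetical wiring, one <audio> per sentence):
// const controller = createPlayAll([...document.querySelectorAll('audio')]);
// playAllButton.addEventListener('click', () => controller.play());
// pauseButton.addEventListener('click', () => controller.pause());
```

Because each sentence keeps its own `<audio>` element, individual playback continues to work untouched alongside the controller.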
You could use the ended event to play the next sound when the previous sound completes (Playing HTML 5 Audio sequentially via Javascript):
var sounds = [new Audio("1.mp3"), new Audio("2.mp3")];
var i = -1;

playSnd();

function playSnd() {
    i++;
    if (i == sounds.length) return;
    sounds[i].addEventListener('ended', playSnd);
    sounds[i].play();
}
function playAudio(src) {
    var audioElement = new Audio(src);
    audioElement.play();
    return audioElement;
}

const sequences = ['./audio/person.m4a', './audio/4.m4a', './audio/5.m4a', './audio/6.m4a', './audio/6.m4a', './audio/6.m4a', './audio/room.m4a', './audio/3.m4a'];

// play audio
let index = 0;
const audioElement = playAudio(sequences[index]);
audioElement.addEventListener('ended', (e) => {
    index++;
    if (index < sequences.length) {
        audioElement.src = sequences[index];
        audioElement.play();
    }
});
I used JavaScript to solve the problem, using Audio objects and a setInterval. Below is an example:
var sndLetItSnow = new Audio("audio/letItSnow.m4a");
var sndSanta = new Audio("audio/snow.m4a");
var playlist = [sndLetItSnow, sndSanta];
var current = null;
var idx = 0;

function playSound() {
    if (current === null || current.ended) {
        // advance to the next track
        current = playlist[idx++];
        // if we passed the last track of the playlist, wrap around to the first
        if (idx >= playlist.length)
            idx = 0;
        // rewind to the beginning
        current.currentTime = 0;
        // play
        current.play();
    }
}

setInterval(playSound, 1000);
For more documentation on the Audio object, you can visit this page:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
I hope this helps!
Related
I am making a game in the browser and use sound effects, for example a shot or an explosion. For every generated instance of a class a new Audio object is also created, which eats so much memory that the app crashes after 2-3 minutes, i.e. it gets very slow. Is there a better way to do this? Maybe creating the new Audio() somewhere else just once and calling it when needed, not every time a new enemy, bullet, etc. is generated.
For example:
class Bullet extends Common {
    constructor() {
        super();
        this.element = document.createElement("div");
        this.audio = new Audio("./audio/LaserShot.wav");
    }
}
And in the upper class Spaceship I call it every time I shoot by pressing space:
executeShot() {
    const bullet = new Bullet(this.getCurrentPosition(), this.element.offsetTop, this.area);
    bullet.init();
    this.bullets.push(bullet);
}
Not sure if this works well in all scenarios, but you can try the following code and see if it works.
<button class="btn">Click</button>
class AudioService {
    constructor(initialsetup = 1) {
        this._audios = [];
        for (let i = 0; i < initialsetup; i++) {
            this._audios.push(new Audio());
        }
    }

    /**
     * Find an available audio element, creating a new one if none is free.
     */
    _getAudioElemToPlay() {
        const audios = this._audios.filter(audio => {
            // the audio is free if it is empty (no valid url loaded) or has ended
            // TODO: not sure if these attributes are enough
            return !audio.duration || audio.ended;
        });
        if (audios.length === 0) {
            const audio = new Audio();
            this._audios.push(audio);
            return audio;
        }
        return audios[0];
    }

    playAudio(url) {
        const player = this._getAudioElemToPlay();
        player.src = url;
        player.load();
        player.play();
    }
}
const audioService = new AudioService();
let index = 0;
document.querySelector('.btn').addEventListener('click', function() {
    index++;
    const audioList = new Array(12).fill(0).map((value, index) => {
        return `https://www.soundhelix.com/examples/mp3/SoundHelix-Song-${index}.mp3`;
    });
    audioService.playAudio(audioList[index % audioList.length]);
});
Here is a link to run the above code: https://codepen.io/hphchan/pen/xxqbezb. You can also change the audio files to whatever you like.
My main idea for solving the issue is to reuse the audio elements created, by keeping them in an array and reusing each element once it finishes playing.
Of course, for the demo I am playing the audio with a click button, but you can definitely plug it into your game.
I hope the solution helps. In case there are cases it does not cover (I do not have much exposure to this area), it would be nice if you could post your modified solution here, so we can all learn together.
Have you looked at the Web Audio API? If it works for you, a single AudioBuffer can hold the audio data in memory for a given cue, and you can play it multiple times by spawning AudioBufferSourceNode objects. If you have many different sounds playing, this might not be much help, but if you are reusing sounds continuously (many laser shots), this could be a big help. Another benefit is that this way of playing sounds has pretty low latency.
I just used this for the first time, getting it to work yesterday. But I'm loading it with raw PCM data (floats ranging from -1 to 1). There is surely a way to load a wav into this or an equivalent in-memory structure, but I'm too new to the API to know yet how to do it.
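To illustrate the buffer-reuse idea, here is a minimal sketch (the helper name and file URL are hypothetical): a decoded AudioBuffer is kept in memory, and each playback spawns a fresh AudioBufferSourceNode, since source nodes are one-shot objects that can only be started once.

```javascript
// Turn a decoded AudioBuffer into a reusable "cue": each call to the
// returned function spawns a fresh one-shot AudioBufferSourceNode that
// shares the same in-memory audio data.
function makeCue(ctx, buffer) {
  return function play(when = 0) {
    const source = ctx.createBufferSource(); // source nodes are single-use
    source.buffer = buffer;                  // shared decoded audio data
    source.connect(ctx.destination);
    source.start(when);                      // schedule on the context clock
    return source;
  };
}

// In a browser, loading and decoding a wav would look roughly like this
// (the URL is a placeholder):
// const ctx = new AudioContext();
// const data = await fetch("laser.wav").then(r => r.arrayBuffer());
// const buffer = await ctx.decodeAudioData(data);
// const laser = makeCue(ctx, buffer);
// laser();                       // play now
// laser(ctx.currentTime + 0.5);  // play again half a second from now
```

decodeAudioData handles the wav parsing for you, so there is no need to hand-build the PCM float arrays for file-based sounds.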
I am a neophyte JS developer with a past in server-side programming.
I am creating a simple web app that allows various users to engage in live audio chat with one another. Whenever a new user logs into an audio chat room, the following ensures they can hear everyone talking:
// plays remote streams
async function playStreams(streamList) {
    await Promise.all(streamList.map(async (item, index) => {
        // add an audio streaming unit, and play it
        var audio = document.createElement('audio');
        audio.addEventListener("loadeddata", function() {
            audio.play();
        });
        audio.srcObject = item.remoteStream;
        audio.id = 'audio-stream-' + item.streamID;
        audio.muted = false;
    }));
}
Essentially I pass a list of streams into that function and play all of them.
Now if a user leaves the environment, I feel the prudent thing to do is to destroy their <audio> element.
To achieve that, I tried
function stopStreams(streamList) {
    streamList.forEach(function (item, index) {
        let stream_id = item.streamID;
        let audio_elem = document.getElementById('audio-stream-' + stream_id);
        if (audio_elem) {
            audio_elem.stop();
        }
    });
}
Unfortunately, audio_elem is always null in the function above. It is not that the streamIDs are mismatched; I have checked them.
Maybe this issue has to do with scoping? I am guessing the <audio> elements created within playStreams are scoped within that function, and thus stopStreams is unable to access them.
I need a domain expert to clarify whether this is actually the case. Moreover, I also need a solution regarding how to better handle this situation - one that cleans up successfully after itself.
p.s. a similar SO question came close to asking the same thing. But their case was not numerous <audio> elements being dynamically created and destroyed as users come and go. I do not know how to use that answer to solve my issue. My concepts are unclear.
I created a global dictionary like so -
const liveStreams = {};
Next, when I play live streams, I save all the <audio> elements in the aforementioned global dictionary -
// plays remote streams
async function playStreams(streamList) {
    await Promise.all(streamList.map(async (item, index) => {
        // add an audio streaming unit, and play it
        var audio = document.createElement('audio');
        audio.addEventListener("loadeddata", function() {
            audio.play();
        });
        audio.srcObject = item.remoteStream;
        audio.muted = false;
        // log the audio object in a global dictionary
        liveStreams[item.streamID] = audio;
    }));
}
I destroy the streams via accessing them from the liveStreams dictionary, like so -
function stopStreams(streamList) {
    streamList.forEach(function (item, index) {
        let stream_id = item.streamID;
        // Check if liveStreams contains the audio element associated with stream_id
        if (liveStreams.hasOwnProperty(stream_id)) {
            let audio_elem = liveStreams[stream_id];
            // Stop the playback
            audio_elem.pause(); // now the object becomes subject to garbage collection
            // Remove the audio object's ref from the dictionary
            delete liveStreams[stream_id];
        }
    });
}
And that does it.
I have loaded an audio file into an object's property. I use it as a sprite, where each section has a specified start time and end time. Then I use this code to play a particular part of the audio file:
speak: function(str) {
    this.vo.currentTime = 0;
    var curr = this.sprite[str];
    this.vo.currentTime = curr[0];
    this.vo.volume = _data.vol[1];
    this.vo.play();
    var onTimeUpdate = function() {
        if (this.currentTime >= curr[1]) {
            this.pause();
            this.load();
            this.currentTime = 0;
        }
    };
    this.vo.addEventListener('timeupdate', onTimeUpdate, false);
}
The "vo" is the audio file, loaded as vo = new Audio('..file..').
The "str" is the name of the property which contains the start and end time of that part. For example, if I pass aud.speak('hello'); it plays the part of the sprite where it says "hello".
The problem: once a part is played, most of the other parts won't play after that. I have tried almost everything; that's why you see the .load(), .pause(), and .currentTime = 0 calls as well.
I fixed the problem by reloading the audio file every time the speak() method is called. Since the file is hosted locally, it is not a problem to reload it, but this probably won't be the best solution for someone using remotely hosted content.
this.vo = new Audio('..file..');
I added this line at the start of the speak() method.
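An alternative that avoids reloading the file: the likely cause of the original problem is that each call to speak() adds another timeupdate listener, and the older listener closures keep pausing playback at earlier end times. Attaching exactly one listener fixes that. A hedged sketch, with names of my own choosing and the sprite assumed to be the question's map of [start, end] arrays:

```javascript
// A sprite player that installs exactly one timeupdate handler.
// `vo` is an audio-like object (play/pause/currentTime/addEventListener);
// `sprite` maps part names to [startTime, endTime] pairs.
function createSpritePlayer(vo, sprite) {
  let endTime = Infinity; // end of the part currently playing

  // Single handler, attached once; it consults the current end time
  // instead of capturing it in a new closure on every call.
  vo.addEventListener('timeupdate', function() {
    if (vo.currentTime >= endTime) {
      vo.pause();
      endTime = Infinity;
    }
  });

  return {
    speak(name) {
      const part = sprite[name];
      vo.currentTime = part[0];
      endTime = part[1];
      vo.play();
    }
  };
}

// Hypothetical usage:
// const player = createSpritePlayer(new Audio('..file..'), { hello: [0, 1.2] });
// player.speak('hello');
```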
I am trying to make something where sound samples are chosen randomly at intervals, so that the song evolves and is different each time it is listened to. HTML audio was not sufficient for this because the timing was imprecise, so I am experimenting with Web Audio, but it seems quite complicated. For now, I just want to know how to make a new audio file play at exactly 16 seconds, or 32 seconds, etc. I came across something like this:
playSound.start(audioContext.currentTime + numb);
But as of now I cannot make it work.
var audioContext = new audioContextCheck();

function audioFileLoader(fileDirectory) {
    var soundObj = {};
    soundObj.fileDirectory = fileDirectory;

    var getSound = new XMLHttpRequest();
    getSound.open("GET", soundObj.fileDirectory, true);
    getSound.responseType = "arraybuffer";
    getSound.onload = function() {
        audioContext.decodeAudioData(getSound.response, function(buffer) {
            soundObj.soundToPlay = buffer;
        });
    };
    getSound.send();

    soundObj.play = function(volumeVal, pitchVal) {
        var volume = audioContext.createGain();
        volume.gain.value = volumeVal;
        var playSound = audioContext.createBufferSource();
        playSound.playbackRate.value = pitchVal;
        playSound.buffer = soundObj.soundToPlay;
        playSound.connect(volume);
        volume.connect(audioContext.destination);
        playSound.start(audioContext.currentTime);
    };

    return soundObj;
}

var harp1 = audioFileLoader("IRELAND/harp1.ogg");
var harp2 = audioFileLoader("IRELAND/harp2.ogg");

function keyPressed() {
    harp1.play(.5, 2);
    harp2.start(audioContext.currentTime + 7.5);
}
window.addEventListener("keydown", keyPressed, false);
You see, I am trying to make harp2.ogg play immediately when harp1.ogg finishes. Eventually I want to be able to choose the next file randomly, but for now I just need to know how to make it happen at all. How can I make harp2.ogg play exactly 7.5 seconds after harp1.ogg begins (or better yet, is there a way to trigger it when harp1 ends, without a gap in the audio)? Help appreciated, thanks!
WebAudio should be able to start audio very precisely using start(time), down to the nearest sample time. If it doesn't, it's because the audio data from decodeAudioData doesn't contain the data you expected, or it's a bug in your browser.
Looks like when you call keyPressed, you want to trigger both songs to start playing. One immediately, and the other in 7.5 seconds.
The function to play the songs is soundObj.play and it needs to take an additional argument, which is the audioContext time to play the song. Something like: soundObj.play = function(volumeVal, pitchVal, startTime) {...}
The function keyPressed() block should look something like this:
harp1.play(.5, 2, 0);
harp2.play(1, 1, audioContext.currentTime + 7.5);
audioContext.resume();
audioContext.resume() starts the actual audio (or rather, starts the audio-graph timing so it runs the things you've scheduled).
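The modified play function the answer describes might look like this standalone sketch (the function name scheduledPlay and the explicit buffer argument are mine, not from the original code):

```javascript
// Play a decoded AudioBuffer at an absolute time on the AudioContext clock.
// Callers pass ctx.currentTime + offset to delay playback by `offset` seconds.
function scheduledPlay(ctx, buffer, volumeVal, pitchVal, startTime) {
  const volume = ctx.createGain();
  volume.gain.value = volumeVal;
  const source = ctx.createBufferSource();
  source.playbackRate.value = pitchVal;
  source.buffer = buffer;
  source.connect(volume);           // source -> gain -> speakers
  volume.connect(ctx.destination);
  source.start(startTime);          // sample-accurate scheduling
  return source;
}

// keyPressed() would then become roughly:
// scheduledPlay(audioContext, harp1Buffer, 0.5, 2, audioContext.currentTime);
// scheduledPlay(audioContext, harp2Buffer, 1, 1, audioContext.currentTime + 7.5);
// audioContext.resume();
```

Because both start times are computed from the same audioContext.currentTime reading, the 7.5-second gap is exact regardless of when the key handler runs.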
I have many small audio files, and I want to play these files one after another, not all of them at the same time. I have used the Audio object in JavaScript like this:
var audio_1 = new Audio();
var audio_2 = new Audio();
var audio_3 = new Audio();
audio_1.src = "/path1";
audio_2.src = "/path2";
audio_3.src = "/path3";
Now I just need to call the play function on every object, but I need to play audio_1 alone, and play audio_2 when the first one has ended.
The solution I found is to test the ended property of every object:
audio_1.ended; // returns true when it ends playing
I found a property onended inside the audio object; I thought it was a function, but it's not. Can someone help me and show me the best way to solve this problem?
Use addEventListener instead of assigning a function to the onended property:
audio.addEventListener('ended', function() {}); // Do
audio.onended = function() {}; // Don't
So, an (IMHO dirty) way is this:
audio_1.play();
audio_1.addEventListener('ended', function() {
    audio_2.play();
    audio_2.addEventListener('ended', function() {
        audio_3.play();
    });
});
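For more than a handful of files, the nesting above can be replaced by a small loop over an array. A sketch (the helper name is mine): each element's 'ended' event is chained to the next element's play(), which generalizes the same idea to any number of tracks.

```javascript
// Play an array of audio-like elements one after another by chaining
// each element's 'ended' event to the next element's play().
function playInSequence(audios) {
  audios.forEach((audio, i) => {
    audio.addEventListener('ended', function() {
      if (i + 1 < audios.length) {
        audios[i + 1].play();  // start the next track when this one ends
      }
    });
  });
  audios[0].play();  // kick off the first track
}

// Hypothetical usage with the question's objects:
// playInSequence([audio_1, audio_2, audio_3]);
```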