I am trying to make something where sound samples are chosen randomly at intervals, so that the song evolves and is different each time it's listened to. HTML Audio wasn't sufficient for this because its timing is imprecise, so I am experimenting with Web Audio, but it seems quite complicated. For now, I just want to know how to make a new audio file play at exactly 16 seconds, or 32 seconds, etc. I came across something like this:
playSound.start(audioContext.currentTime + numb);
But as of now I cannot make it work.
var audioContext = new audioContextCheck();

function audioFileLoader(fileDirectory) {
    var soundObj = {};
    soundObj.fileDirectory = fileDirectory;

    var getSound = new XMLHttpRequest();
    getSound.open("GET", soundObj.fileDirectory, true);
    getSound.responseType = "arraybuffer";
    getSound.onload = function() {
        audioContext.decodeAudioData(getSound.response, function(buffer) {
            soundObj.soundToPlay = buffer;
        });
    };
    getSound.send();

    soundObj.play = function(volumeVal, pitchVal) {
        var volume = audioContext.createGain();
        volume.gain.value = volumeVal;
        var playSound = audioContext.createBufferSource();
        playSound.playbackRate.value = pitchVal;
        playSound.buffer = soundObj.soundToPlay;
        playSound.connect(volume);
        volume.connect(audioContext.destination);
        playSound.start(audioContext.currentTime);
    };
    return soundObj;
}
var harp1 = audioFileLoader("IRELAND/harp1.ogg");
var harp2 = audioFileLoader("IRELAND/harp2.ogg");
function keyPressed() {
    harp1.play(.5, 2);
    harp2.start(audioContext.currentTime + 7.5);
}
window.addEventListener("keydown", keyPressed, false);
You see I am trying to make harp2.ogg play immediately when harp1.ogg finishes. Eventually I want to choose the next file randomly, but for now I just need to make it happen at all. How can I make harp2.ogg play exactly 7.5 seconds after harp1.ogg begins (or better yet, is there a way to trigger it when harp1 ends, without a gap in the audio)? Help appreciated, thanks!
WebAudio should be able to start audio very precisely using start(time), down to the nearest sample time. If it doesn't, it's because the audio data from decodeAudioData doesn't contain the data you expected, or it's a bug in your browser.
Looks like when you call keyPressed, you want to trigger both songs to start playing. One immediately, and the other in 7.5 seconds.
The function that plays the songs is soundObj.play, and it needs to take an additional argument: the AudioContext time at which to start the song. Something like soundObj.play = function(volumeVal, pitchVal, startTime) {...}, with playSound.start(startTime) inside instead of playSound.start(audioContext.currentTime).
The function keyPressed() block should look something like this:
harp1.play(.5, 2, 0);
harp2.play(1, 1, audioContext.currentTime + 7.5);
audioContext.resume();
audioContext.resume() starts the actual audio (or rather, starts the audio graph's clock so that it runs the things you've scheduled).
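To make that concrete, here is a minimal sketch of the modified play method and of the offset arithmetic behind back-to-back scheduling. The makePlay and scheduleTimes names are my own illustrative helpers, not part of the question's code or of the Web Audio API:

```javascript
// Sketch of the modified play method: same body as the question's
// soundObj.play, but the caller supplies the absolute start time.
function makePlay(soundObj, audioContext) {
  return function (volumeVal, pitchVal, startTime) {
    var volume = audioContext.createGain();
    volume.gain.value = volumeVal;
    var playSound = audioContext.createBufferSource();
    playSound.playbackRate.value = pitchVal;
    playSound.buffer = soundObj.soundToPlay;
    playSound.connect(volume);
    volume.connect(audioContext.destination);
    // start(t) schedules sample-accurately; a time in the past (e.g. 0)
    // means "start immediately".
    playSound.start(startTime);
  };
}

// The "play B exactly when A ends" arithmetic is cumulative addition:
// each clip starts where the previous one ends (durations in seconds).
function scheduleTimes(now, durations) {
  var times = [];
  var t = now;
  for (var i = 0; i < durations.length; i++) {
    times.push(t);
    t += durations[i];
  }
  return times;
}
```

With a play method shaped like this, gapless chaining is just a matter of scheduling every clip up front from one reading of audioContext.currentTime, instead of reacting to an ended event after the fact.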
Related
I am making a game in the browser and use sound effects, for example shot and explosion. For every generated class instance a new Audio object is also created, which eats memory; the app gets very slow and crashes after two or three minutes. Is there a better way to do this? Maybe create the Audio objects somewhere else just once and call them when needed, not every time a new enemy, bullet, etc. is generated.
For example:
class Bullet extends Common {
    constructor() {
        this.element = document.createElement("div");
        this.audio = new Audio("./audio/LaserShot.wav");
    }
}
And in the parent class Spaceship I call it every time I shoot by pressing space:
executeShot() {
    const bullet = new Bullet(this.getCurrentPosition(), this.element.offsetTop, this.area);
    bullet.init();
    this.bullets.push(bullet);
}
Not sure if this works great in all scenarios, but you can try the following code and see if it works for you.
<button class="btn">Click</button>
class AudioService {
    constructor(initialsetup = 1) {
        this._audios = [];
        for (let i = 0; i < initialsetup; i++) {
            this._audios.push(new Audio());
        }
    }

    /**
     * Get an audio element that is free to (re)use.
     */
    _getAudioElemToPlay() {
        const audios = this._audios.filter(audio => {
            // reusable if the audio is empty (no valid url) or has ended
            // TODO: not sure if these attributes are more than enough
            return !audio.duration || audio.ended;
        });
        console.log('audios', audios);
        if (audios.length === 0) {
            const audio = new Audio();
            this._audios.push(audio);
            return audio;
        }
        return audios[0];
    }

    playAudio(url) {
        const player = this._getAudioElemToPlay();
        player.src = url;
        player.load();
        player.play();
    }
}
const audioService = new AudioService();
let index = 0;
document.querySelector('.btn').addEventListener('click', function() {
    index++;
    const audioList = new Array(12).fill(0).map((value, index) => {
        return `https://www.soundhelix.com/examples/mp3/SoundHelix-Song-${index}.mp3`;
    });
    audioService.playAudio(audioList[index % audioList.length]);
});
Here is the link to run the above code, https://codepen.io/hphchan/pen/xxqbezb.
You can also swap in other audio files as you like.
My main idea for solving the issue is to reuse the audio elements: keep them in an array, and hand one back out once it has finished playing.
Of course, for the demo, I am playing the audio by using a click button. But definitely, you can plug it into your game.
Hope the solution helps. There may be cases it doesn't cover, as I don't have much exposure to this area, so it would be nice if you could post your modified solution here, and we can all learn together.
Have you looked at the Web Audio API? If it works for you, a single AudioBuffer can hold the audio data in memory for a given cue, and you can play it multiple times by spawning AudioBufferSourceNode objects. If you have many different sounds playing, this might not be much help, but if you are reusing sounds continuously (many laser shots), this could be a big help. Another benefit is that this way of playing sounds has pretty low latency.
I just used this for the first time, getting it to work yesterday. But I'm loading it with raw PCM data (floats ranging from -1 to 1). There is surely a way to load an AudioBuffer (or an equivalent in-memory structure) from a wav, but I'm too new to the API to know how yet.
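Since wav files came up: here is a hedged sketch of decoding one into an AudioBuffer and replaying it with cheap one-shot source nodes. The loadCue/playCue names and the use of fetch are my assumptions; decodeAudioData and createBufferSource are the standard API calls:

```javascript
// Sketch: decode an encoded file (wav/mp3/ogg) once, keep the AudioBuffer,
// and spawn a cheap AudioBufferSourceNode for each playback.
// loadCue/playCue are illustrative names, not a standard API.
async function loadCue(ctx, url) {
  const response = await fetch(url);
  const encoded = await response.arrayBuffer();
  // decodeAudioData turns the encoded bytes into raw PCM held in an
  // AudioBuffer, reusable for every playback.
  return await ctx.decodeAudioData(encoded);
}

function playCue(ctx, buffer) {
  // Source nodes are single-use and inexpensive to create per shot.
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
  return source;
}
```

The idea is that the expensive part (download and decode) happens once per cue, while each laser shot only allocates a lightweight source node.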
I have a Chrome extension in which I'm trying to jump forward or backward (based on a user command) to a specific time in the video by setting the currentTime property of the video object. Before trying to set currentTime, a variety of operations work just fine. For example:
document.getElementsByTagName("video")[1].play(); // works fine
document.getElementsByTagName("video")[1].pause(); // works fine
document.getElementsByTagName("video")[1].muted = true; // works fine
document.getElementsByTagName("video")[1].muted = false; // works fine
BUT as soon as I try to jump to a specific point in the video by doing something like this:
document.getElementsByTagName("video")[1].currentTime = 500; // doesn't work
No errors are thrown, the video pauses, and any attempted actions after this point do nothing. So the items shown above (play/pause/mute/unmute) no longer work after attempting to set currentTime. If I read the value of currentTime after setting it, it correctly displays the new time that I just set it to. Yet nothing I do will make it play, and in fact even trying to make the video play by clicking the built-in toolbar no longer works. So, apparently setting currentTime wreaks all kinds of havoc in the video player. Yet if I reload the video, all works as before as long as I don't try to set currentTime.
I can easily jump to various times (backward or forward) by sliding the slider on the toolbar, so there must be some way internally to do that. Is there some way I can discover what code does a successful time jump? Because it's a Chrome extension I can inject custom js into the executing Hulu js, but I don't know what command I would send.
Any ideas?
Okay I fiddled around with it for a little while to see how I could reproduce the click event on the player and came up with the following solution:
handleViewer = function() {
    var thumbnailMarker = $('.thumbnail-marker'),
        progressBarTotal = thumbnailMarker.parent(),
        controlsBar = $('.controls-bar'),
        videoPlayer = $('#content-video-player');

    var init = function() {
            thumbnailMarker = $('.thumbnail-marker');
            progressBarTotal = thumbnailMarker.parent();
            controlsBar = $('.controls-bar');
            videoPlayer = $('#content-video-player');
        },
        check = function() {
            if (!thumbnailMarker || !thumbnailMarker.length) {
                init();
            }
        },
        show = function() {
            thumbnailMarker.show();
            progressBarTotal.show();
            controlsBar.show();
        },
        hide = function() {
            controlsBar.hide();
        },
        getProgressBarWidth = function() {
            return progressBarTotal[0].offsetWidth;
        };

    return {
        goToTime: function(time) {
            var seekPercentage,
                duration;
            check();
            duration = videoPlayer[0].duration;
            if (time > 0 && time < duration) {
                seekPercentage = time / duration;
                this.jumpToPercentage(seekPercentage);
            }
        },
        jumpToPercentage: function(percentage) {
            check();
            if (percentage >= 1 && percentage <= 100) {
                percentage = percentage / 100;
            }
            if (percentage >= 0 && percentage < 1) {
                show();
                thumbnailMarker[0].style.left = (getProgressBarWidth() * percentage) + "px";
                thumbnailMarker[0].click();
                hide();
            }
        }
    };
}();
Once that code is initialized you can do the following:
handleViewer.goToTime(500);
Alternatively
handleViewer.jumpToPercentage(50);
I've tested this in Chrome on a MacBook Pro. Let me know if you run into any issues.
Rather than try to find the javascript responsible for changing the time, why not try to simulate the user events that cause the time to change?
Figure out the exact sequence of mouse events that trigger the time change.
This is probably some combination of mouseover, mousedown, mouseup, and click.
Then recreate those events synthetically and dispatch them to the appropriate elements.
This is the approach taken by extensions like Stream Keys and Vimium.
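A hedged sketch of what dispatching such a synthetic sequence can look like. The event list and coordinates here are illustrative; note that synthetic events carry isTrusted === false, and whether the player honors them varies by site, which is why targeting the right internal element matters:

```javascript
// Dispatch a mouse-event sequence at a point on an element.
// The selector you pass in and the coordinates are up to you; real
// progress bars need real page coordinates for the target position.
function simulateClickAt(el, x, y) {
  ['mouseover', 'mousedown', 'mouseup', 'click'].forEach(function (type) {
    el.dispatchEvent(new MouseEvent(type, {
      bubbles: true,      // let the player's delegated handlers see it
      cancelable: true,
      view: window,
      clientX: x,
      clientY: y
    }));
  });
}
```

If the player listens for pointer events instead of mouse events, the same pattern applies with the PointerEvent constructor.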
The video should be ready to play before setting the currentTime.
Try adding the play() call before setting currentTime:
document.getElementsByTagName("video")[1].play();
document.getElementsByTagName("video")[1].currentTime = 500;
Looks like it works if you first pause, then set currentTime, then play again.
document.getElementsByTagName("video")[1].pause();
document.getElementsByTagName("video")[1].currentTime = 800;
document.getElementsByTagName("video")[1].play();
To make it more robust, you would probably want to hook into an event like onseeked to issue the play command.
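A sketch of that pause/seek/play sequence wired to the seeked event (seekAndPlay is a made-up helper name; the rest is the standard media-element API):

```javascript
// Pause, seek, then resume only once the browser reports the seek finished.
function seekAndPlay(video, time) {
  video.pause();
  video.addEventListener('seeked', function onSeeked() {
    // one-shot listener: detach before resuming playback
    video.removeEventListener('seeked', onSeeked);
    video.play();
  });
  video.currentTime = time;
}

// e.g. seekAndPlay(document.getElementsByTagName("video")[1], 800);
```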
I have several long stories for which the source audio is sentence-by-sentence audio files. I would like to create a web page where one could listen to individual sentences of a particular story many times, or listen to the whole story from start to finish.
To start, my web page has many <audio> elements, each of which is in a <p> with the corresponding text. Each of these <audio> elements corresponds to a single sentence of the story.
I was about to start coding up a JavaScript object providing some sort of "play all" functionality: you'd click the button and it would play audio[0], then when that finished, audio[1], and so on. It would also have a pause button and keep track of where you were. This way the individual sentences could still be played on their own, since they each have an audio element, or one could use the "play all" buttons at the top to treat the sequence of audio elements as if they were one big element.
Then I started asking myself whether there's a better/easier/more canonical solution here. One thing I thought of was concatenating all of these individual audio files into one big audio file and perhaps creating 'chapter' <track> elements. Is this preferable?
What are the pros and cons to each approach? Is there some out-of-the-box solution already made for me which I simply haven't turned up?
You could use the ended event to play the next sound when the previous sound completes (Playing HTML 5 Audio sequentially via Javascript):
var sounds = new Array(new Audio("1.mp3"), new Audio("2.mp3"));
var i = -1;
playSnd();

function playSnd() {
    i++;
    if (i == sounds.length) return;
    sounds[i].addEventListener('ended', playSnd);
    sounds[i].play();
}
function playAudio(src) {
    var audioElement = new Audio(src);
    audioElement.play();
    return audioElement;
}

const sequences = ['./audio/person.m4a', './audio/4.m4a', './audio/5.m4a', './audio/6.m4a', './audio/6.m4a', './audio/6.m4a', './audio/room.m4a', './audio/3.m4a'];

// play the list back to back by swapping src on the same element
let index = 0;
const audioElement = playAudio(sequences[index]);
audioElement.addEventListener('ended', (e) => {
    index++;
    if (index < sequences.length) {
        audioElement.src = sequences[index];
        audioElement.play();
    }
});
I solved the problem in JavaScript using Audio objects and a setInterval. Below is an example:
var sndLetItSnow = new Audio("audio/letItSnow.m4a");
var sndSanta = new Audio("audio/snow.m4a");
var playlist = [sndLetItSnow, sndSanta];
var current = null;
var idx = 0;
function playSound() {
    if (current === null || current.ended) {
        // go to the next track
        current = playlist[idx++];
        // if that was the last of the playlist, return to the first
        if (idx >= playlist.length)
            idx = 0;
        // rewind to the beginning
        current.currentTime = 0;
        // play
        current.play();
    }
}
setInterval(playSound, 1000);
For more documentation on Audio object you can visit this page:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
I hope this helps!
So my question is: how would you check whether a video has buffered enough to play through without stopping to buffer again, and if so, play it?
Or: how would I check whether the video has buffered 50%, and play it if it has?
Here's what I've tried (but when I looked at the buffered amount in the controls, it didn't seem to have buffered a lot):
var Video = document.getElementById("videoPlayer");
Video.oncanplaythrough = function HasBuff() {
    alert("Is Buffered");
    Video.play();
};
As said, with this code the video didn't seem to have buffered much when I looked at the controls; not even a quarter was buffered. Perhaps it would be better to check whether the video has buffered 50% or so and then play it, though I'm not sure how to go about that.
Thank you for reading.
I'm at beginner level, so sorry if this seems an easy or silly question, but we all have to start somewhere, right? :)
Thanks again.
I think you can use the onplaying event:
var vid = document.getElementById("myVideo");
vid.onplaying = function() {
    alert("The video is now playing");
};
Or if you want to check whether the video is currently buffering, you can use the onwaiting event. More info here: http://www.w3schools.com/tags/ref_av_dom.asp
I made a little test:
var vid = document.getElementById("video");

var buffered = function() {
    var bufferedPercent =
        vid.duration > 0 && vid.buffered.length > 0 ?
            vid.buffered.end(0) / vid.duration * 100 :
            0;
    return 'buffered ' + bufferedPercent.toFixed(0) + '%';
};

vid.onprogress = function() {
    console.log('progress: ' + buffered());
};
vid.oncanplay = function() {
    console.log('canplay: ' + buffered());
};
vid.oncanplaythrough = function() {
    console.log('canplaythrough: ' + buffered());
};
vid.onsuspend = function() {
    console.log('suspend: ' + buffered());
};
On Chrome I get output like this:
canplay: buffered 5%
canplaythrough: buffered 5%
progress: buffered 15%
progress: buffered 25%
suspend: buffered 25%
With my test video, "buffered" never gets beyond 25% (autoplay disabled). On Firefox it stops at 19%.
So it looks like you cannot force the browser to buffer more than it wants to. I also tried calling load() explicitly; no difference. Maybe if you seek forward it might buffer more, but then it might discard the already-buffered part (I didn't try that out). But both browsers seemed to buffer quite a bit more after firing the canplaythrough event, and then fired the suspend event when they stopped buffering, so maybe you can use that.
In summary, the options I see are:
The ideal way: Trust the browser's estimation on when to start playing (autoplay = true).
The hacky way: Wait for the suspend event and start the video then (autoplay = false). These two browsers seemed to buffer a bit more that way.
Something like this:
vid.onsuspend = function() {
    if (vid.paused) {
        vid.play();
    }
};
I have streaming audio in wave format being played out through the HTML5 capabilities of Firefox. Because some of our users complained about choppy audio, we decided to come up with a strategy for measuring whether the download rate of the audio was good enough to sustain playback. We first tried the canplaythrough event, but apparently the browser was too optimistic about it and it did not quite work. Here is some code I am proposing, using jQuery. I am a js/jquery beginner, so I wanted to see if anyone had any better ideas.
$(document).ready(function() {
    var previous_buffer = 0;
    var start_time = (new Date()).getTime();

    // function to check how fast the buffer is filling
    function checkBufferSpeed() {
        // if we haven't started buffering yet, return
        if (this.buffered.length < 1) {
            return;
        }
        // get the current buffer status
        var current_buffer = this.buffered.end(0);
        console.log("current_buffer:" + current_buffer);
        // if we get the same current buffer twice, the browser is done buffering
        if (current_buffer > 0 && current_buffer == previous_buffer) {
            this.play();
            $("#audio").off("progress");
            return;
        }
        // get the time spent
        var time_spent = ((new Date()).getTime() - start_time) / 1000;
        console.log("time_spent:" + time_spent);
        // if we buffered faster than the time spent, play
        if (current_buffer > time_spent) {
            this.play();
            $("#audio").off("progress");
            return;
        }
        previous_buffer = current_buffer;
    }

    $("#audio").on("progress", checkBufferSpeed);
});
Also, the reason I am not checking the audio element's duration property is that Firefox seems to always report it as infinite unless the stream is already cached.
Also, Firefox seems to always fire two progress events even if the stream is fully available in the cache, which is why I check whether I got the same buffer value twice and, if so, just play the sound.
Is there a chance that the browser will never fire progress events?
Are there any enhancements I could make to this code to make it better?