What is the best way to play sound with delay 50ms or 100ms?
Here is what I tried:
var beat = new Audio('/sound/BEAT.wav');
var time = 300;
playbeats();
function playbeats() {
    beat.cloneNode().play();
    setTimeout(playbeats, time);
}
This works correctly, but my goal is to play BEAT.wav every 100ms. When I change the "time" variable to 100, playback becomes very laggy.
My BEAT.wav is 721ms long (that's why I'm using cloneNode()).
What alternatives are there to solve this?
You can use setInterval(); the arguments are the same.
setInterval(function() {
    playbeats();
}, 100);
and your playbeats function should be:
function playbeats() {
    var tempBeat = beat.cloneNode();
    tempBeat.play();
}
Your whole program should look like this:
var beat = new Audio('/sound/BEAT.wav');

setInterval(function() {
    playbeats();
}, 100);

function playbeats() {
    var tempBeat = beat.cloneNode();
    tempBeat.play();
}
You can use the Web Audio API, but the code will be a bit different. If you want the Web Audio API's timing and loop capabilities, you will need to load the file into a buffer first. It also requires that your code is run on a server. Here is an example:
var audioContext = new AudioContext();
var audioBuffer;

var getSound = new XMLHttpRequest();
getSound.open("GET", "sound/BEAT.wav", true);
getSound.responseType = "arraybuffer";
getSound.onload = function() {
    audioContext.decodeAudioData(getSound.response, function(buffer) {
        audioBuffer = buffer;
    });
};
getSound.send();

function playback() {
    var playSound = audioContext.createBufferSource();
    playSound.buffer = audioBuffer;
    playSound.loop = true;
    playSound.connect(audioContext.destination);
    playSound.start(audioContext.currentTime, 0, 0.3);
}
window.addEventListener("mousedown", playback);
I would also recommend using the Web Audio API. From there, you can simply loop a buffer source node every 100ms or 50ms or whatever time you want.
To do this, as stated in other responses, you'll need to use an XMLHttpRequest to load the sound file via a server:
// set up the Web Audio context
var audioCtx = new AudioContext();

// create a new buffer
// 2 channels, 4410 samples (100 ms at 44100 samples/sec), 44100 samples per sec
var buffer = audioCtx.createBuffer(2, 4410, 44100);

// load the sound file via an XMLHttpRequest from a server
var request = new XMLHttpRequest();
request.open('GET', '/sound/BEAT.wav', true);
request.responseType = 'arraybuffer';
request.onload = function() {
    var audioData = request.response;
    audioCtx.decodeAudioData(audioData, function(newBuffer) {
        buffer = newBuffer;
    });
};
request.send();
Now you can make a buffer source node to loop the playback:
// create the buffer source
var bufferSource = audioCtx.createBufferSource();
// set the buffer we want to use
bufferSource.buffer = buffer;
// set the buffer source node to loop
bufferSource.loop = true;
// specify the loop points in seconds (0.1s = 100ms)
// this is a little redundant since we already set our buffer to be 100ms
// so by default it would loop when the buffer comes to an end (at 100ms)
bufferSource.loopStart = 0;
bufferSource.loopEnd = 0.1;
// connect the buffer source to the Web Audio sound output
bufferSource.connect(audioCtx.destination);
// play!
bufferSource.start();
Note that if you stop the playback via bufferSource.stop(), you will not be able to start it again. You can only call start() once, so you'll need to create a new source node if you want to start playback again.
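A minimal sketch of that restart pattern, reusing the audioCtx and buffer variables from above (the helper name is hypothetical):

// Source nodes are one-shot, so build a fresh one for every playback.
function startLoop() {
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.loop = true;
    source.loopStart = 0;
    source.loopEnd = 0.1;
    source.connect(audioCtx.destination);
    source.start();
    return source; // keep the reference so you can call stop() on it later
}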
Note that because of the way the sound file is loaded via an XMLHttpRequest, if you try to test this on your machine without running a server, you'll get a cross-origin request error on most browsers. So the simplest way to get around this if you want to test it on your machine is to run Python's SimpleHTTPServer (`python -m SimpleHTTPServer`, or `python3 -m http.server` in Python 3).
I can't seem to adjust the volume on this audio element I have when loading the page. Here is the code:
var bleep = new Audio();
bleep.src = "Projectwebcrow2.mp3";
bleep.volume = 0.1;
If you are using the audio tag, just get the DOM node in JavaScript and manipulate its volume property.
var audio = document.querySelector('audio');
// Getting
console.log(audio.volume); // 1
// Setting
audio.volume = 0.5; // Reduce the Volume by Half
The number that you set should be in the range 0.0 to 1.0, where 0.0 is the quietest and 1.0 is the loudest.
Note: If the value you set is not in the range 0.0 to 1.0, then JS will throw an IndexSizeError.
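If the value comes from user input, a small guard avoids that error. A minimal sketch (the helper name is hypothetical):

// Clamp an arbitrary value into the valid 0.0–1.0 range before assigning it.
function setVolumeSafely(audio, value) {
    audio.volume = Math.min(1, Math.max(0, value));
}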
For the Web Audio API: a bit of code first, where we'll load our music file and play it.
var ctx = new (window.AudioContext || window.webkitAudioContext)();
function loadMusic(url) {
    var req = new XMLHttpRequest();
    req.open('GET', url, true);
    req.responseType = 'arraybuffer';
    req.onload = function() {
        ctx.decodeAudioData(req.response, playSound);
    };
    req.send();
}

function playSound(buffer) {
    var src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(ctx.destination);
    // Play now! (start() replaces the deprecated noteOn())
    src.start(0);
}
It should work. You need to give us more details (I can't comment). If you didn't already, just add
bleep.play()
Also, autoplaying audio is disabled by default in most browsers; maybe that is the cause.
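You can detect that case, since play() returns a Promise in modern browsers. A minimal sketch along those lines:

// If autoplay is blocked, retry after the first user interaction.
bleep.play().catch(function() {
    document.addEventListener('click', function retry() {
        document.removeEventListener('click', retry);
        bleep.play();
    });
});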
If your music is an element from the page, you can use:
var music = document.getElementById("myMusic");
music.volume = 0.2;
If it isn't, use:
var music = new Audio('audio/correct.mp3');
music.volume = 0.2;
You can check https://www.w3schools.com/tags/av_prop_volume.asp for more details.
OK, I was going to suggest you create a function for it and call the function. Since it has worked out, best of luck.
I'm working on a project which requires the ability to stream audio from a webpage to other clients. I'm already using websocket and would like to channel the data there.
My current approach uses MediaRecorder, but there is a problem with sampling which causes interrupts. It records 1s of audio and then sends it to the server, which relays it to other clients. Is there a way to capture a continuous audio stream and transform it to base64?
Maybe if there is a way to create a base64 audio from MediaStream without delay it would solve the problem. What do you think?
I would like to keep using WebSockets; I know there is WebRTC.
Have you ever done something like this? Is this doable?
                                                              --> Device 1
MediaStream -> MediaRecorder -> base64 -> WebSocket -> Server --> Device ..
                                                              --> Device 18
Here's a demo of the current approach; you can try it here: https://jsfiddle.net/8qhvrcbz/
var sendAudio = function(b64) {
    var message = 'var audio = document.createElement(\'audio\');';
    message += 'audio.src = "' + b64 + '";';
    message += 'audio.play().catch(console.error);';
    eval(message);
    console.log(b64);
};
navigator.mediaDevices.getUserMedia({
    audio: true
}).then(function(stream) {
    setInterval(function() {
        var chunks = [];
        var recorder = new MediaRecorder(stream);
        recorder.ondataavailable = function(e) {
            chunks.push(e.data);
        };
        recorder.onstop = function(e) {
            var audioBlob = new Blob(chunks);
            var reader = new FileReader();
            reader.readAsDataURL(audioBlob);
            reader.onloadend = function() {
                var b64 = reader.result;
                b64 = b64.replace('application/octet-stream', 'audio/mpeg');
                sendAudio(b64);
            };
        };
        recorder.start();
        setTimeout(function() {
            recorder.stop();
        }, 1050);
    }, 1000);
});
WebSocket is not the best fit here. I solved it by using WebRTC instead of WebSocket.
The WebSocket solution worked when recording 1050ms instead of 1000ms; it causes a bit of overlap, but that is still better than hearing gaps.
Although you have solved this through WebRTC, which is the industry-recommended approach, I'd like to share my answer on this.
The problem here is not WebSockets in general but rather the MediaRecorder API. Instead of using it, one can use PCM audio capture and then submit the captured array buffers to a web worker or WASM for encoding to MP3 chunks or similar.
const context = new AudioContext();
let leftChannel = [];
let rightChannel = [];
let recordingLength = 0;
let bufferSize = 512;
let sampleRate = context.sampleRate;

// audioStream is a MediaStream, e.g. from navigator.mediaDevices.getUserMedia
const audioSource = context.createMediaStreamSource(audioStream);
const scriptNode = context.createScriptProcessor(bufferSize, 1, 1);
audioSource.connect(scriptNode);
scriptNode.connect(context.destination);
scriptNode.onaudioprocess = function(e) {
    // Copy the raw PCM samples out of the processing buffer; from here you can
    // convert them to WAV or MP3 (e.g. in a web worker)
    leftChannel.push(new Float32Array(e.inputBuffer.getChannelData(0)));
    recordingLength += bufferSize;
};
Based on my experiments, this gives you "real-time" audio. My theory about the MediaRecorder API is that it does some buffering before emitting anything, which causes the observable delay.
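One way to probe that theory (a sketch, assuming a stream from getUserMedia as in the question above): MediaRecorder.start() accepts a timeslice, which makes it emit chunks periodically instead of buffering until stop().

var recorder = new MediaRecorder(stream);
recorder.ondataavailable = function(e) {
    // Log when each chunk actually arrives to measure the latency.
    console.log('chunk of ' + e.data.size + ' bytes at ' + performance.now().toFixed(0) + 'ms');
};
recorder.start(100); // request a chunk roughly every 100ms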
Alright, so I'm trying to determine the intensity (in dB) of samples of an audio file which is recorded by the user's browser.
I have been able to record it and play it through an HTML audio element.
But when I try to use this element as a source and connect it to an AnalyserNode, AnalyserNode.getFloatFrequencyData always returns an array full of -Infinity, getByteFrequencyData always returns zeroes, and getByteTimeDomainData is full of 128.
Here's my code:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;
var analyser = audioCtx.createAnalyser();
var bufferLength = analyser.frequencyBinCount;
var data = new Float32Array(bufferLength);
mediaRecorder.onstop = function(e) {
    var blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs=opus' });
    chunks = [];
    var audioURL = window.URL.createObjectURL(blob);
    // audio is an HTML audio element
    audio.src = audioURL;
    audio.addEventListener("canplaythrough", function() {
        source = audioCtx.createMediaElementSource(audio);
        source.connect(analyser);
        analyser.connect(audioCtx.destination);
        analyser.getFloatFrequencyData(data);
        console.log(data);
    });
};
Any idea why the AnalyserNode behaves like the source is empty/mute? I also tried to put the stream as source while recording, with the same result.
I ran into the same issue; thanks to some of your code snippets, I made it work on my end (the code below is TypeScript and will not work in the browser as-is):
audioCtx.decodeAudioData(this.result as ArrayBuffer).then(function (buffer: AudioBuffer) {
    soundSource = audioCtx.createBufferSource();
    soundSource.buffer = buffer;
    //soundSource.connect(audioCtx.destination); //I do not need to play the sound
    soundSource.connect(analyser);
    soundSource.start(0);
    setInterval(() => {
        calc(); //In here, I will get the analyzed data with analyser.getFloatFrequencyData
    }, 300); //This can be changed to 0.
    // The interval helps with making sure the buffer has the data
});
Some explanation (I'm still a beginner when it comes to the Web Audio API, so my explanation might be wrong or incomplete):
An analyser needs to be able to analyse a specific part of your sound file. In this case I create an AudioBufferSourceNode that contains the buffer I got from decoding the audio data. I feed the buffer to the source, which eventually will be copied inside the analyser. However, without the interval callback, the buffer never seems to be ready, and the analysed data contains -Infinity (which I assume is the absence of any sound, as there is nothing to read) at every index of the array. That is why the interval is there: it analyses the data every 300ms.
Hope this helps someone!
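An alternative to the fixed interval (a sketch under the same assumptions, i.e. analyser and the data array set up as in the question): poll with requestAnimationFrame and stop when the source finishes.

let running = true;
soundSource.onended = () => { running = false; };

function tick() {
    if (!running) return;
    analyser.getFloatFrequencyData(data); // data is the Float32Array from the question
    requestAnimationFrame(tick);
}
tick();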
You need to fetch the audio file and decode the audio buffer.
The URL of the audio source must also be on the same domain or have the correct CORS headers (as mentioned by Anthony).
Note: Replace <FILE-URI> with the path to your file in the example below.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;
var analyser = audioCtx.createAnalyser();
var button = document.querySelector('button');
var freqs;
var times;
button.addEventListener('click', (e) => {
    fetch("<FILE-URI>", {
        headers: new Headers({
            "Content-Type": "audio/mpeg"
        })
    }).then(function(response) {
        return response.arrayBuffer();
    }).then((ab) => {
        audioCtx.decodeAudioData(ab, (buffer) => {
            source = audioCtx.createBufferSource();
            source.connect(audioCtx.destination);
            source.connect(analyser);
            source.buffer = buffer;
            source.start(0);
            viewBufferData();
        });
    });
});
// Watch the changes in the audio buffer
function viewBufferData() {
    setInterval(function() {
        freqs = new Uint8Array(analyser.frequencyBinCount);
        times = new Uint8Array(analyser.frequencyBinCount);
        analyser.smoothingTimeConstant = 0.8;
        analyser.fftSize = 2048;
        analyser.getByteFrequencyData(freqs);
        analyser.getByteTimeDomainData(times);
        console.log(freqs);
        console.log(times);
    }, 1000);
}
Is the source file from a different domain? That would fail in createMediaElementSource.
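If the server does send CORS headers, a sketch of a possible workaround (assuming the audio element and the audioCtx/analyser from the question) is to mark the element as crossOrigin before wiring it up:

// With proper CORS headers on the response, this lets the analyser read data
// instead of producing silence (zeros / -Infinity).
audio.crossOrigin = 'anonymous';
var source = audioCtx.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(audioCtx.destination);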
I am trying to schedule the beep sound to play 3x, one second apart. However, the sound is only playing once. Any thoughts on why this might be? (It's included within a larger JavaScript function that declares context etc...)
var beepBuffer;

var loadBeep = function() {
    var getSound = new XMLHttpRequest(); // Load the Sound with XMLHttpRequest
    getSound.open("GET", "/static/music/chime.wav", true); // Path to Audio File
    getSound.responseType = "arraybuffer"; // Read as Binary Data
    getSound.onload = function() {
        context.decodeAudioData(getSound.response, function(buffer) {
            beepBuffer = buffer; // Decode the Audio Data and Store it in a Variable
        });
    };
    getSound.send(); // Send the Request and Load the File
};

var playBeep = function() {
    for (var j = 0; j < 3; j++) {
        var beeper = context.createBufferSource(); // Declare a New Sound
        beeper.buffer = beepBuffer; // Attach our Audio Data as its Buffer
        beeper.connect(context.destination); // Link the Sound to the Output
        console.log(j);
        beeper.start(j); // Play the Sound Immediately
    }
};
Close - and the other answer's code will work - but it's not the synchronicity; it's that you're not adding context.currentTime to the start time. start() doesn't take an offset from "now" - it takes an absolute time. Add context.currentTime to the start param, and you should be good.
Your code assumes that beeper.start(j) is a synchronous method, i.e. it waits for the sound to complete playing. This is not the case, so your loop is probably playing all 3 instances at nearly the same exact time.
One solution is to delay the playing of each instance, by passing a time parameter to the start() method:
var numSecondsInBeep = 3;
for (var j = 0; j < 3; j++) {
    var beeper = context.createBufferSource(); // Declare a New Sound
    beeper.buffer = beepBuffer; // Attach our Audio Data as its Buffer
    beeper.connect(context.destination); // Link the Sound to the Output
    console.log(j);
    beeper.start(context.currentTime + j * numSecondsInBeep);
}
See here for more info on the start() API.
I'm creating a piano in the browser using JavaScript. In order to play the same key multiple times simultaneously, instead of just playing the Audio object, I clone it and play the clone; otherwise I'd have to wait for the audio to finish or restart it, which I don't want.
I've done something like this:
var audioSrc = new Audio('path/');

window.onkeypress = function(event) {
    var currentAudioSrc = audioSrc.cloneNode();
    currentAudioSrc.play();
};
The problem is, I was checking Chrome's inspector, and I noticed that every time I clone the object, the browser downloads the file again.
I checked some people who wanted to achieve similar things and noticed that most of them have the same problem I do: they re-download the file. The only example I found that can play the same audio source multiple times simultaneously is SoundJS: http://www.createjs.com/SoundJS
I tried checking the source code but couldn't figure out how it was done. Any idea?
With the Web Audio API you could do something like this:
Download the file once via XMLHttpRequest.
Append the response to a buffer.
Create a new bufferSource and play it on each call.
Fall back to your first implementation if the Web Audio API is not supported (IE).
window.AudioContext = window.AudioContext || window.webkitAudioContext;

if (!window.AudioContext) {
    yourFirstImplementation();
} else {
    var buffer,
        ctx = new AudioContext(),
        gainNode = ctx.createGain();
    gainNode.connect(ctx.destination);

    var vol = document.querySelector('input');
    vol.value = gainNode.gain.value;
    vol.addEventListener('change', function() {
        gainNode.gain.value = this.value;
    }, false);

    function createBuffer() {
        ctx.decodeAudioData(this.response, function(b) {
            buffer = b;
        }, function(e) { console.warn(e); });
        var button = document.querySelector('button');
        button.addEventListener('click', function() { playSound(buffer); });
        button.className = 'ready';
    }

    var file = 'https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3',
        xhr = new XMLHttpRequest();
    xhr.onload = createBuffer;
    xhr.open('GET', file, true);
    xhr.responseType = 'arraybuffer';
    xhr.send();

    function playSound(buf) {
        var source = ctx.createBufferSource();
        source.buffer = buf;
        source.connect(gainNode);
        source.onended = function() {
            if (this.stop) this.stop();
            if (this.disconnect) this.disconnect();
        };
        source.start(0);
    }
}

function yourFirstImplementation() {
    alert('webAudioAPI is not supported by your browser');
}
button { opacity: .2; }
button.ready { opacity: 1; }
<button>play</button>
<input type="range" max="5" step=".01" title="volume"/>
cloneNode takes one boolean argument:
var dupNode = node.cloneNode(deep);
/*
  node
    The node to be cloned.
  dupNode
    The new node that will be a clone of node.
  deep (optional)
    true if the children of the node should also be cloned,
    or false to clone only the specified node.
*/
Also note from MDN:
Deep is an optional argument. If omitted, the method acts as if the value of deep was true, defaulting to using deep cloning as the default behavior. To create a shallow clone, deep must be set to false.
This behavior has been changed in the latest spec, and if omitted, the method will act as if the value of deep was false. Though it's still optional, you should always provide the deep argument both for backward and forward compatibility.
So, try using deep = false to prevent re-downloading the resource:
var audioSrc = new Audio('path/');

window.onkeypress = function(event) {
    var currentAudioSrc = audioSrc.cloneNode(false);
    currentAudioSrc.play();
};
Load it manually and assign a Blob URL of the binary data to src:
<audio id="audioEl" data-src="audio.mp3"></audio>
var xhr = new XMLHttpRequest();
xhr.open('GET', audioEl.dataset.src);
xhr.responseType = 'blob';
xhr.onload = () => {
    audioEl.src = URL.createObjectURL(xhr.response);
};
xhr.send();
This way when you clone it, only the reference to the in-memory binary data is cloned.
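For example, the piano handler from the question can then stay essentially as it was (a sketch reusing the audioEl element from above):

window.onkeypress = function(event) {
    // The clone points at the same in-memory Blob URL, so no network request is made.
    var clone = audioEl.cloneNode();
    clone.play();
};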