I'm experimenting with the Web Audio API. My goal is to create a digital guitar where each string has a sound source of an actual guitar playing that string open, and all other fret position sounds are generated dynamically from it. After some research into the subject (this is all pretty new to me), it sounded like this might be achieved by altering the frequency of the source sound sample.
The problem is I've seen lots of algorithms for altering synthesized sine waves, but nothing for altering the frequency of an audio sample. Here is a sample of my code to give a better idea of how I'm trying to implement this:
// Guitar chord buffer
var chordBuffer = null;
// Create audio context
var context = new webkitAudioContext();
// Load sound sample
var request = new XMLHttpRequest();
request.open('GET', 'chord.mp3', true);
request.responseType = 'arraybuffer';
request.onload = loadChord;
request.send();
// Handle guitar string "pluck"
$('.string').mouseenter(function(e) {
    e.preventDefault();
    var source = context.createBufferSource();
    source.buffer = chordBuffer;
    // Create a JavaScriptNode so we can get at the raw audio buffer
    var jsnode = context.createJavaScriptNode(1024, 1, 1);
    jsnode.onaudioprocess = changeFrequency;
    // Connect nodes and play
    source.connect(jsnode);
    jsnode.connect(context.destination);
    source.noteOn(0);
});
function loadChord() {
    context.decodeAudioData(
        request.response,
        function(pBuffer) { chordBuffer = pBuffer; },
        function(pError) { console.error(pError); }
    );
}
function changeFrequency(e) {
    var ib = e.inputBuffer.getChannelData(0);
    var ob = e.outputBuffer.getChannelData(0);
    var n = ib.length;
    for (var i = 0; i < n; ++i) {
        // Code needed...
    }
}
So there you have it - I can play the sound just fine, but I'm at a bit of a loss when it comes to writing the code in the changeFrequency function that would change the chord sample's frequency so it sounds like another fret position on the string. Any help with this code would be appreciated, as would opinions on whether what I'm attempting is even possible.
Thanks!
playbackRate will change the pitch of the sound, but it also changes the playback time: a higher pitch means a shorter duration.
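If that trade-off is acceptable for short plucks, here is a minimal sketch of the idea, assuming the context and chordBuffer variables from your question (and the newer source.start() call in place of noteOn()):
function playFret(fret) {
    var source = context.createBufferSource();
    source.buffer = chordBuffer;
    // Each fret raises the pitch by one semitone, i.e. a factor of 2^(1/12)
    source.playbackRate.value = Math.pow(2, fret / 12);
    source.connect(context.destination);
    source.start(0); // start(0) plays immediately (noteOn(0) in older implementations)
}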
If you want to change only the pitch, maybe you can use a pitch shifter. Check my JavaScript pitch shifter implementation here and its use with a JavaScriptNode in this plugin.
You can get the desired behavior by setting playbackRate, but as Brad says, you're going to have to use multi-sampling. Also see this SO question: Setting playbackRate on audio element connected to web audio api.
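To sketch what that multi-sampling could look like (purely illustrative: the samples map and its buffers are assumptions, not part of the question's code), you keep recordings at several fret positions and pitch-shift from the nearest one, so playbackRate never strays far from 1:
// Hypothetical buffers recorded at frets 0, 5 and 12 of the same string
var samples = { 0: openBuffer, 5: fifthFretBuffer, 12: twelfthFretBuffer };
function playFretMultisampled(fret) {
    // Pick the recorded fret closest to the requested one
    var nearest = Object.keys(samples).reduce(function(a, b) {
        return Math.abs(b - fret) < Math.abs(a - fret) ? b : a;
    });
    var source = context.createBufferSource();
    source.buffer = samples[nearest];
    // Shift only by the remaining semitone difference
    source.playbackRate.value = Math.pow(2, (fret - nearest) / 12);
    source.connect(context.destination);
    source.start(0);
}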
Related
I'm building a wheel of fortune in HTML+JS that spins rather quickly. Every time a new color flies by the mark, the wheel should play a click sound. At top speed this sounds almost like a machine gun, so a new file basically starts playing before the old one has finished. The file itself is always the same: click.wav
It works fine in Chrome, and only in Chrome. Firefox has a weird bug where it only plays the sound if some other audio source is active, such as a YouTube video playing in a different tab. Edge and Safari kind of save up the clicks until the end and then play them all simultaneously. It's a mess...
I use the method described here, which works by cloning an <audio> tag.
I guess this is where the problem is:
var sound = new Audio("sounds/click.wav");
sound.preload = 'auto';
sound.load();
function playsound(){
    var click = sound.cloneNode();
    click.volume = 1;
    click.play();
}
Here is a simplified version of my spinning function that just calls the playsound() function several times per second:
function rotateWheel(){
    angle = angle + acceleration
    while (angle >= 360) {
        angle = angle - 360
    }
    var wheel = document.getElementById("wheel")
    wheel.style.transform = "rotate(" + angle + "deg)"
    // play the click when a new segment rotates by
    if (Math.floor(angle/21) != previousSegment){
        playsound()
        previousSegment = Math.floor(angle/21)
    }
}
You used an answer from here. That method will at some point crash the browser process, because you either create a memory issue or fill up the DOM with elements the browser has to handle, so you should rethink your approach. And, as you found out, it will not work for heavy use in most browsers, like Safari or Firefox.
Looking deeper into the <audio> tag specification, it becomes clear that there are many things that simply can't be done with it, which isn't surprising, since it was designed for media playback.
One of the limitations is: no fine-grained timing of sound.
So you have to find another method. For what you want, we use the Web Audio API, which was designed with use cases like online video games in mind.
Web Audio API
An AudioContext is for managing and playing all sounds. To produce a sound using the Web Audio API, create one or more sound sources and connect them to the sound destination provided by the AudioContext instance (usually the speaker).
The AudioBuffer
With the Web Audio API, audio files can be played only after they’ve been loaded into a buffer. Loading sounds takes time, so assets that are used in the animation/game should be loaded on page load, at the start of the game or level, or incrementally while the player is playing.
The basic steps
1. We use an XMLHttpRequest to load data into a buffer from an audio file.
2. Next, we set up an asynchronous callback and send the actual request to load.
3. Once a sound has been buffered and decoded, it can be triggered instantly.
4. Each time it is triggered, a different instance of the buffered sound is created.
A key feature of sound effects in games is that there can be many of them simultaneously.
So to take your example of the "machine gun": imagine you're in the middle of a gunfight, shooting a machine gun.
The machine gun fires many times per second, causing tens of sound effects to be played at the same time. This is where Web Audio API really shines.
A simple example for your application:
/* global AudioContext:true */
var clickingBuffer = null;
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
function loadClickSound(url) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    // Decode asynchronously
    request.onload = function() {
        context.decodeAudioData(request.response, function(buffer) {
            if (!buffer) {
                console.log('Error decoding file data: ' + url);
                return;
            }
            clickingBuffer = buffer;
        });
    };
    request.onerror = function() {
        console.log('BufferLoader: XHR error');
    };
    request.send();
}
function playSound(buffer, time, volume) {
    var source = context.createBufferSource(); // creates a sound source
    source.buffer = buffer;                    // tell the source which sound to play
    var gainNode = context.createGain();       // create a gain node so we can control the volume
    source.connect(gainNode);                  // connect the source to the gain node
    gainNode.connect(context.destination);     // connect the gain node to the destination (the speakers)
    gainNode.gain.value = volume;              // set the volume
    source.start(time);                        // play the source at the desired time (0 = now)
}
// You call this in your document-ready handler
loadClickSound('sounds/click.wav');
//and this plays the sound
playSound(clickingBuffer, 0, 1);
Now you can play around with different timings and volume variations, for example by introducing a random factor.
If you need a more complex solution with different clicking sounds (stored in a buffer array) and volume/distance variations, that would be a longer piece of code.
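As a hedged illustration of the random-factor idea (the jitter and volume ranges below are made up, not part of the original answer):
// Play each click with a small random delay and volume variation
function playClickWithVariation() {
    var jitter = Math.random() * 0.02;       // up to 20 ms of timing jitter
    var volume = 0.8 + Math.random() * 0.2;  // volume between 0.8 and 1.0
    playSound(clickingBuffer, context.currentTime + jitter, volume);
}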
I am making a little app that will play audio files (mp3, wav) with the ability to apply an equalizer to them (like a regular audio player); for this I am using the Web Audio API.
I managed to get the playing part working in two ways. One is using decodeAudioData of BaseAudioContext:
function getData() {
    source = audioCtx.createBufferSource();
    var request = new XMLHttpRequest();
    request.open('GET', 'viper.ogg', true);
    request.responseType = 'arraybuffer';
    request.onload = function() {
        var audioData = request.response;
        audioCtx.decodeAudioData(audioData, function(buffer) {
            source.buffer = buffer;
            source.connect(audioCtx.destination);
            source.loop = true;
        },
        function(e) { console.log("Error with decoding audio data" + e.err); });
    }
    request.send();
}
// wire up buttons to stop and play audio
play.onclick = function() {
    getData();
    source.start(0);
    play.setAttribute('disabled', 'disabled');
}
and a much easier way with Audio() and createMediaElementSource():
let audioContext = new AudioContext();
let audioContainer = new Audio('assets/mp3/pink_noise.wav');
let sourceNode = audioContext.createMediaElementSource(audioContainer);
sourceNode.connect(audioContext.destination);
audioContainer.play();
I think the second one uses less memory than createBufferSource(), because createBufferSource() stores the complete audio file in memory. But I am not sure about this; I really do not have much experience with tools like Chrome DevTools to read it correctly.
Does createMediaElementSource() use less memory than createBufferSource() ?
Edit:
Using Chrome's Task Manager, it seems that with createBufferSource(), just loading the file increases the Memory column by around 40,000 K, versus roughly 60 K with createMediaElementSource(), and the JavaScript Memory column by 1,000 K vs. 20 K.
I think you've found the answer in the task manager.
You need to be aware of a couple of things.
- With a media element, you lose sample-accurate control; this may not be important to you.
- You need appropriate access permissions when using a MediaElementAudioSourceNode; this may not be a problem if all of your assets are on the same server.
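On the permissions point, a hedged sketch of using a cross-origin asset (the URL is a placeholder; the server must send matching CORS headers, otherwise the source node outputs silence):
let audioContext = new AudioContext();
let audio = new Audio();
audio.crossOrigin = 'anonymous'; // request the file with CORS
audio.src = 'https://example.com/assets/mp3/pink_noise.wav';
let node = audioContext.createMediaElementSource(audio);
node.connect(audioContext.destination);
audio.play();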
I want to join two audios into one, synchronized, with HTML5 on the client side. I've seen that the Web Audio API can do many things, but I have not been able to find how to do this.
I have links to two audio files (.mp3, .wav, ...); what I want is to synchronize these two audio files, like a voice and a song. I do not want them one after the other; I want them synchronized, playing at the same time.
I would do it all on the client side using HTML5, without needing the server. Is this possible to do?
Thank you so much for your help.
As I understand it, you have two audio files which you want to render together on the client. The Web Audio API can do this for you quite easily, entirely in JavaScript. A good place to start is http://www.html5rocks.com/en/tutorials/webaudio/intro/
An example script would be
var context = new (window.AudioContext || window.webkitAudioContext)(); // Create an audio context
// Create an XML HTTP Request to collect your audio files
// https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest
var xhr1 = new XMLHttpRequest();
var xhr2 = new XMLHttpRequest();
var audio_buffer_1, audio_buffer_2;
xhr1.open("GET","your_url_to_audio_1");
xhr1.responseType = 'arraybuffer';
xhr1.onload = function() {
// Decode the audio data
context.decodeAudioData(request.response, function(buffer) {
audio_buffer_1 = buffer;
}, function(error){});
};
xhr2.open("GET","your_url_to_audio_2");
xhr2.responseType = 'arraybuffer';
xhr2.onload = function() {
// Decode the audio data
context.decodeAudioData(request.response, function(buffer) {
audio_buffer_2 = buffer;
}, function(error){});
};
xhr1.send();
xhr2.send();
These would load the Web Audio API AudioBuffers (https://webaudio.github.io/web-audio-api/#AudioBuffer) of your two files into the global variables audio_buffer_1 and audio_buffer_2.
Now to create a new audio buffer, you would need to use an OfflineAudioContext:
// Assumes both buffers are of the same length. If not you need to modify the 2nd argument below
var offlineContext = new OfflineAudioContext(context.destination.channelCount,audio_buffer_1.duration*context.sampleRate , context.sampleRate);
var summing = offlineContext.createGain();
summing.connect(offlineContext.destination);
// Build the two buffer source nodes and attach their buffers
var buffer_1 = offlineContext.createBufferSource();
var buffer_2 = offlineContext.createBufferSource();
buffer_1.buffer = audio_buffer_1;
buffer_2.buffer = audio_buffer_2;
// Connect both sources to the summing gain node
buffer_1.connect(summing);
buffer_2.connect(summing);
// Do something with the result by adding a callback
offlineContext.oncomplete = function(event) {
    var renderedBuffer = event.renderedBuffer;
    // Place code here
};
//Begin the summing
buffer_1.start(0);
buffer_2.start(0);
offlineContext.startRendering();
Once done, the callback will receive an event whose renderedBuffer property is a new buffer containing the direct summation of the two buffers.
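For instance, here is a sketch of one possible callback body (not part of the original answer): play the mixed result through the regular context.
offlineContext.oncomplete = function(event) {
    var mixed = context.createBufferSource();
    mixed.buffer = event.renderedBuffer; // the summed audio
    mixed.connect(context.destination);
    mixed.start(0);
};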
I have developed a simple custom chromecast receiver for a game.
In it I play short sounds from the receiver javascript using:
this.bounceSound = new Audio("paddle.ogg");
to create the audio object when the game is loaded, and then using:
this.bounceSound.play();
to play the sound when needed in the game.
This works fine in Chrome on my laptop, but when running the receiver on my Chromecast, some sounds don't play and others are delayed.
Could this be a problem with my choice of sound format (.ogg) for the audio files?
If not, what else could be the problem?
Are there any best practices on the details of the sound files (frequency, bit depth, etc.)?
Thanks
Just for the record to avoid future confusion of other developers trying to load and play back multiple short sounds at the same time:
On Chromecast, the HTML video and audio tags can only support a single active media element at a time.
(Source: https://plus.google.com/+LeonNicholls/posts/3Fq5jcbxipJ - make sure to read the rest; it also contains important information about limitations)
Only one audio element will be loaded; the others get error code 4 (at least that was the case during my debugging sessions). The correct way of loading and playing back several short sounds is to use the Web Audio API, as explained by Leon Nicholls in the Google+ post linked above.
Simple Web Audio API Wrapper
I whipped up a crude replacement for the HTMLAudioElement in JavaScript that is based on the Web Audio API:
function WebAudio(src) {
    if (src) this.load(src);
}

WebAudio.prototype.audioContext = new AudioContext;

WebAudio.prototype.load = function(src) {
    if (src) this.src = src;
    console.log('Loading audio ' + this.src);
    var self = this;
    var request = new XMLHttpRequest;
    request.open("GET", this.src, true);
    request.responseType = "arraybuffer";
    request.onload = function() {
        self.audioContext.decodeAudioData(request.response, function(buffer) {
            if (!buffer) {
                if (self.onerror) self.onerror();
                return;
            }
            self.buffer = buffer;
            if (self.onload)
                self.onload(self);
        }, function(error) {
            self.onerror(error);
        });
    };
    request.send();
};

WebAudio.prototype.play = function() {
    var source = this.audioContext.createBufferSource();
    source.buffer = this.buffer;
    source.connect(this.audioContext.destination);
    source.start(0);
};
It can be used as follows:
var audio1 = new WebAudio('sounds/sound1.ogg');
audio1.onload = function() {
    audio1.play();
};

var audio2 = new WebAudio('sounds/sound2.ogg');
audio2.onload = function() {
    audio2.play();
};
You should download the sounds up front, before you start the game. Also be aware that these sounds will then be stored in memory, and the Chromecast has very limited memory for that. Make sure the sounds are small and will all fit into memory.
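A sketch of that preloading step, using the WebAudio wrapper above (the sound names and the startGame() function are placeholders, not from the original answer):
var sounds = {
    bounce: new WebAudio('paddle.ogg'),
    score: new WebAudio('score.ogg')
};
var pending = Object.keys(sounds).length;
Object.keys(sounds).forEach(function(name) {
    sounds[name].onload = function() {
        // Start the game only once every sound is decoded and in memory
        if (--pending === 0) startGame();
    };
});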
I have trouble with the frequency animation of sounds through the Web Audio API in Android Chrome v37.
I can hear the music, but the animation doesn't show.
A lot of experimenting finally led me to two separate ways of loading sounds and animating them.
In the first approach, I load the sound via an HTML5 audio element, then create a MediaElementSource with the audio element as parameter.
I connect the MediaElementSource to an Analyser (created with AudioContext.createAnalyser).
The Analyser I connect to a GainNode, and finally connect the GainNode to AudioContext.destination.
Code:
var acontext = new AudioContext();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();
var audio = new Audio(path_to_file);
var source = acontext.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(gainNode);
gainNode.connect(acontext.destination);
This scheme works in desktop Chrome and the newest mobile Safari, and also in Firefox.
The second way I found has a few differences: the sound is read into a buffer, which is then connected to the analyser.
Code:
var acontext = new AudioContext();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();
var source = acontext.createBufferSource();
var request = new XMLHttpRequest();
request.open('GET', path, true);
request.responseType = 'arraybuffer';
request.addEventListener('load', function(){ source.buffer = acontext.createBuffer(request.response, false); }, false);
request.send();
source.connect(analyser);
analyser.connect(gainNode);
gainNode.connect(acontext.destination);
To draw the animation I use a canvas; here is the data for drawing:
analyser.fftSize = 1024;
analyser.smoothingTimeConstant = 0.92;
var dataArray = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(dataArray); //fill dataArray from analyser
for (var i = 0; i < analyser.frequencyBinCount; i++) {
    var barHeight = dataArray[i];
    // ...and other drawing logic here.
}
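For context, a sketch of the animation loop this implies (the canvas drawing itself is omitted; requestAnimationFrame drives the repaint):
function draw() {
    requestAnimationFrame(draw);
    analyser.getByteFrequencyData(dataArray); // refresh dataArray each frame
    // clear the canvas and draw one bar per frequency bin here
}
draw();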
The second way works in older Chrome versions, mobile browsers and Safari.
But in Android Chrome v37, neither way works. As I said before, the first way doesn't show the animation, and the second one just breaks with an error: acontext.createBuffer() requires 3 parameters instead of 2.
As I understand it, in the new Web Audio API version this method was rewritten with a different signature and different parameters, so I don't use it.
Any advice on how to make this work in Android Chrome v37?
I found a cross-browser solution.
acontext.decodeAudioData(request.response, function(buffer) {
    source.buffer = buffer;
});
This works correctly in all browsers. But I dropped the audio tags and load the sounds over XMLHttpRequest instead. If you know a way to get the buffer from an audio element so it can be decoded with decodeAudioData, please comment and explain how.
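For completeness, a sketch of how that fix slots into the second loading approach from the question (same variables as above; calling source.start(0) once the buffer is set is an assumption about the intended flow):
request.addEventListener('load', function() {
    acontext.decodeAudioData(request.response, function(buffer) {
        source.buffer = buffer;
        source.start(0); // begin playback once decoding succeeds
    });
}, false);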