How to detect the frequency of a sound in JavaScript?

How can I detect whether a sound is playing within the human hearing range?
For example, show a silence icon if:
The frequency is lower than 20 Hz, or
The frequency is higher than 20 kHz,
even though stream.paused is false.

You can use a pitch detection audio library such as Wad.
Example:
var voice = new Wad({ source: 'mic' });
var tuner = new Wad.Poly();
tuner.add(voice);
voice.play();

// The tuner is now calculating the pitch and note name of its input
// 60 times per second. These values are stored in tuner.pitch and tuner.noteName.
tuner.updatePitch();

var logPitch = function() {
    console.log(tuner.pitch, tuner.noteName);
    requestAnimationFrame(logPitch);
};
logPitch();
// If you sing into your microphone, your pitch will be logged to the console in real time.

tuner.stopUpdatingPitch(); // Stop calculating the pitch if you don't need to know it anymore.
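To answer the original question, you could compare tuner.pitch against the 20 Hz to 20 kHz hearing range and toggle the icon accordingly. A minimal sketch, assuming a hypothetical silence-icon element on the page:

var silenceIcon = document.getElementById('silence-icon'); // hypothetical element
var checkAudibility = function() {
    // tuner.pitch is the detected frequency in Hz; it is undefined when no pitch is detected
    var audible = tuner.pitch >= 20 && tuner.pitch <= 20000;
    silenceIcon.style.display = audible ? 'none' : 'inline';
    requestAnimationFrame(checkAudibility);
};
checkAudibility();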

I don't know if you can find the frequency in Hz with plain JavaScript; you'll probably have to use the Web Audio API. However, with JavaScript you can definitely check and set the volume on a numeric scale. Check out the details and sample code here: http://www.w3schools.com/tags/av_prop_volume.asp
Example values:
1.0 is the highest volume (100%; this is the default)
0.5 is half volume (50%)
0.0 is silent (same as mute)
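For example, a minimal sketch, assuming an <audio> element exists on the page:

var audio = document.querySelector('audio'); // any audio element on the page
audio.volume = 0.5; // half volume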

How to generate negative audio waves from input sound in JavaScript?

I found a JavaScript snippet that captures the current microphone input just to send it out again. You can see it here:
https://codepen.io/MyXoToD/pen/bdb1b834b15aaa4b4fcc8c7b50c23a6f?editors=1010 (only works over https).
I was wondering how I can generate the "negative" of the captured sound, just like "noise cancellation" works: the script detects the current noise around me, and whenever there is a wave going up I want to generate a wave going down, so they cancel each other out. Is there a way to do this in JavaScript?
So this outputs the current sound recorded:
var input = audio_context.createMediaStreamSource(stream);
var volume = audio_context.createGain();
volume.gain.value = 0.8;
input.connect(volume);
volume.connect(audio_context.destination);
The question is how to produce the negative of this sound and output this instead.
You're already streaming your signal through a Web Audio chain. What you need to do is chain a processing node between the input and output nodes (after or before the gain node, it doesn't matter); the script for that node would be a very simple sign inversion. (This answer uses the draft AudioWorker API, which never shipped; current browsers provide AudioWorklet for the same purpose.)
var negationWorker = audio_context.createAudioWorker( "negate.js" )
var negationNode = negationWorker.createNode()
input.connect(negationNode)
negationNode.connect(volume)
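If you just want the inversion without a worker, note that a plain GainNode accepts negative gain values, so the sign flip can be done natively. A minimal sketch against the code above:

// Multiply every sample by -1 (phase inversion)
var inverter = audio_context.createGain();
inverter.gain.value = -1;
input.connect(inverter);
inverter.connect(volume); // then volume.connect(audio_context.destination) as before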

Get Final Output Frequency of Chained Oscillators

I've set up a web page with a theremin and I'm trying to change the color of a web page element based on the frequency of the note being played. The way I'm generating sound right now looks like this:
osc1 = page.audioCX.createOscillator();
pos = getMousePos(page.canvas, ev);
osc1.frequency.value = pos.x;
gain = page.audioCX.createGain();
gain.gain.value = 60;
osc2 = page.audioCX.createOscillator();
osc2.frequency.value = 1;
osc2.connect(gain);
gain.connect(osc1.frequency);
osc1.connect(page.audioCX.destination);
What this does is oscillate the pitch of the sound created by osc1. I can change the color to the frequency of osc1 by using osc1.frequency.value, but this doesn't factor in the changes applied by the other parts.
How can I get the resultant frequency from those chained elements?
You have to do the addition yourself (osc1.frequency.value + output of gain).
The best current (but see below) way to get access to the output of gain is probably to use a ScriptProcessorNode. You can just use the last sample from each buffer passed to the ScriptProcessorNode, and set the buffer size based on how frequently you want to update the color.
(Note on ScriptProcessorNode: There is a bug in Chrome and Safari that makes ScriptProcessorNode not work if it doesn't have at least one output channel. You'll probably have to create it with one input and one output, have it send all zeros to the output, and connect it to the destination, to get it to work.)
Near-future answer: You can also try using an AnalyserNode, but under the current spec, the time domain data can only be read from an AnalyserNode as bytes, which means the floating point samples are being converted to be in the range [0, 255] in some unspecified way (probably scaling the range [-1, 1] to [0, 255], so the values you need would be clipped). The latest draft spec includes a getFloatTimeDomainData method, which is probably your cleanest solution. It seems to have already been implemented in Chrome, but not Firefox, as far as I can tell.
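A minimal sketch of the AnalyserNode approach, assuming getFloatTimeDomainData is available in your browser; the analyser taps the output of gain without affecting the audio path:

var analyser = page.audioCX.createAnalyser();
gain.connect(analyser); // an analyser consumes data but passes nothing onward

var samples = new Float32Array(analyser.fftSize);
function currentFrequency() {
    analyser.getFloatTimeDomainData(samples);
    // Base frequency plus the most recent sample of the modulation signal
    return osc1.frequency.value + samples[samples.length - 1];
}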

How can I set a loop in an AudioBufferSourceNode to exactly match a specific offset in the underlying buffer?

When creating a sound with AudioBufferSourceNode I can set the offset and the duration in seconds.
I have the offset and duration as sample positions, which I suppose I have to convert to time, and I don't know where to start. Is it possible to get an exact match?
There seems to have been a sample offset and length in an earlier version of the Web Audio API, but not any more.
From the documentation: (w3c)
Please note that as a low-level implementation detail, the AudioBuffer
is at a specific sample-rate (usually the same as the AudioContext
sample-rate), and that the loop times (in seconds) must be converted
to the appropriate sample-frame positions in the buffer according to
this sample-rate.
The match should be exact; just divide your sample position by the sample rate:
second_offset = sample_offset / sample_rate
and
second_duration = sample_duration / sample_rate
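A minimal sketch, assuming an existing AudioContext called audio_context, and sampleOffset and sampleDuration holding sample-frame positions into a decoded AudioBuffer named buffer:

var source = audio_context.createBufferSource();
source.buffer = buffer;

// Convert sample-frame positions to seconds using the buffer's own sample rate
source.loopStart = sampleOffset / buffer.sampleRate;
source.loopEnd = (sampleOffset + sampleDuration) / buffer.sampleRate;
source.loop = true;

source.connect(audio_context.destination);
source.start(0, source.loopStart); // begin playback at the loop start point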

How to set the loudness of HTML5 audio?

In an HTML5 game I'm making, I play a "thud" sound when things collide. However, it is a bit unrealistic. No matter the velocity of the objects, they will always make the same, relatively loud "thud" sound. What I'd like to do is to have that sound's loudness depend on velocity, but how do I do that? I only know how to play a sound.
playSound = function(id) {
    sounds[id].play();
};
sounds is an array of new Audio("url") objects.
Use the audio element's volume property. From W3:
The element's effective media volume is volume, interpreted relative
to the range 0.0 to 1.0, with 0.0 being silent, and 1.0 being the
loudest setting, values in between increasing in loudness. The range
need not be linear. The loudest setting may be lower than the system's
loudest possible setting; for example the user could have set a
maximum volume.
Example: sounds[id].volume = 0.5;
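To tie this back to the collision example, a minimal sketch, assuming a hypothetical maxVelocity constant used to normalize the impact speed:

playSound = function(id, velocity) {
    // Map velocity to [0, 1]; clamp so fast impacts don't exceed full volume
    sounds[id].volume = Math.min(velocity / maxVelocity, 1);
    sounds[id].play();
};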
You can even play around with the gain and make the sound louder than 100%. You can use this function to amplify the sound:
function amplifyMedia(mediaElem, multiplier) {
    var context = new (window.AudioContext || window.webkitAudioContext)(),
        result = {
            context: context,
            source: context.createMediaElementSource(mediaElem),
            gain: context.createGain(),
            media: mediaElem,
            amplify: function(multiplier) { result.gain.gain.value = multiplier; },
            getAmpLevel: function() { return result.gain.gain.value; }
        };
    result.source.connect(result.gain);
    result.gain.connect(context.destination);
    result.amplify(multiplier);
    return result;
}
You could do something like this to set the initial amplification to 100%:
var amp = amplifyMedia(sounds[id], 1);
Then if you need the sound to be twice as loud you could do something like this:
amp.amplify(2);
If you want to halve it you can do this:
amp.amplify(0.5);
A full write up of the function is found here:
http://cwestblog.com/2017/08/17/html5-getting-more-volume-from-the-web-audio-api/
You can adjust the volume by setting:
setVolume = function(id, vol) {
    sounds[id].volume = vol; // vol between 0 and 1
};
However, bear in mind that there is a small delay between the volume being set, and it taking effect. You may hear the sound start to play at the previous volume, then jump to the new one.

Render image from audio

Is there a way to render a visualization of an audio file?
Maybe with SoundManager2 / Canvas / HTML5 Audio?
Do you know of any techniques?
I want to create something like this:
You have a ton of samples and tutorials here: http://www.html5rocks.com/en/tutorials/#webaudio
For the moment it works in the latest Chrome and the latest Firefox (Opera?).
Demos: http://www.chromeexperiments.com/tag/audio/
To do it now, for all visitors of a web site, you can check SoundManagerV2.js, which passes through a Flash "proxy" to access audio data: http://www.schillmania.com/projects/soundmanager2/demo/api/ (they are already working on the HTML5 audio engine, to be released as soon as major browsers implement it).
It's up to you to draw the three different kinds of audio data in a canvas: waveform, equalizer and peak.
soundManager.defaultOptions.whileplaying = function() { // AUDIO analyzer !!!
    $document.trigger({ // DISPATCH ALL DATA RELATIVE TO AUDIO STREAM // AUDIO ANALYZER
        type: 'musicLoader:whileplaying',
        sound: {
            position: this.position, // In milliseconds
            duration: this.duration,
            waveformDataLeft: this.waveformData.left, // Array of 256 floating-point (three decimal place) values from -1 to 1
            waveformDataRight: this.waveformData.right,
            eqDataLeft: this.eqData.left, // Two arrays of 256 floating-point (three decimal place) values from 0 to 1,
            eqDataRight: this.eqData.right, // the result of an FFT on the waveform data. Can be used to draw a spectrum (frequency range)
            peakDataLeft: this.peakData.left, // Floating-point values ranging from 0 to 1, indicating "peak" (volume) level
            peakDataRight: this.peakData.right
        }
    });
};
With HTML5 you can get:
var freqByteData = new Uint8Array(analyser.frequencyBinCount);
var timeByteData = new Uint8Array(analyser.frequencyBinCount);

function onaudioprocess() {
    analyser.getByteFrequencyData(freqByteData);
    analyser.getByteTimeDomainData(timeByteData);
    /* draw your canvas */
}
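As a rough sketch of the "draw your canvas" step, assuming a <canvas> element with a hypothetical id "viz" and the analyser from above, drawing the spectrum as vertical bars:

var canvas = document.getElementById('viz'); // hypothetical canvas element
var ctx2d = canvas.getContext('2d');

function draw() {
    analyser.getByteFrequencyData(freqByteData);
    ctx2d.clearRect(0, 0, canvas.width, canvas.height);
    var barWidth = canvas.width / freqByteData.length;
    for (var i = 0; i < freqByteData.length; i++) {
        // Each byte is 0-255; scale it to the canvas height
        var barHeight = (freqByteData[i] / 255) * canvas.height;
        ctx2d.fillRect(i * barWidth, canvas.height - barHeight, barWidth, barHeight);
    }
    requestAnimationFrame(draw);
}
draw();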
Time to work ! ;)
Run samples through an FFT, and then display the energy within a given range of frequencies as the height of the graph at a given point. You'll normally want the frequency ranges going from around 20 Hz at the left to roughly half the sampling rate at the right (or 20 kHz if the sampling rate exceeds 40 kHz).
I'm not so sure about doing this in JavaScript though. Don't get me wrong: JavaScript is perfectly capable of implementing an FFT -- but I'm not at all sure about doing it in real time. OTOH, for user viewing, you can get by with around 5-10 updates per second, which is likely to be a considerably easier target to reach. For example, 20 ms of samples updated every 200 ms might be halfway reasonable to hope for, though I certainly can't guarantee that you'll be able to keep up with that.
http://ajaxian.com/archives/amazing-audio-sampling-in-javascript-with-firefox
Check out the source code to see how they're visualizing the audio.
This isn't possible yet except by fetching the audio as binary data and unpacking the MP3 (not JavaScript's forte), or maybe by using Java or Flash to extract the bits of information you need (it seems possible but it also seems like more headache than I personally would want to take on).
But you might be interested in Dave Humphrey's audio experiments, which include some cool visualization stuff. He's doing this by making modifications to the browser source code and recompiling it, so this is obviously not a realistic solution for you. But those experiments could lead to new features being added to the <audio> element in the future.
For this you would need to do a Fourier transform (look for FFT), which will be slow in JavaScript, and not possible in real time at present.
If you really want to do this in the browser, I would suggest doing it in Java/Silverlight, since they deliver the fastest number-crunching speed in the browser.
