I found a JavaScript snippet that captures the current microphone input just to send it out again. You can see it here:
https://codepen.io/MyXoToD/pen/bdb1b834b15aaa4b4fcc8c7b50c23a6f?editors=1010 (only works with https).
I was wondering how I can generate the "negative" of the captured sound, the way noise cancellation works: the script detects the current noise around me, and whenever there is a wave going up I want to generate a wave going down so they cancel each other out. Is there a way to do this in JavaScript?
So this outputs the current sound recorded:
var input = audio_context.createMediaStreamSource(stream);
volume = audio_context.createGain();
volume.gain.value = 0.8;
input.connect(volume);
volume.connect(audio_context.destination);
The question is how to produce the negative of this sound and output this instead.
You're already streaming your signal through a Web Audio chain. What you need to do is insert an AudioWorker node between the input and output nodes (before or after the gain node, it doesn't matter). The script for the worker would be a very simple sign inversion.
var negationWorker = audio_context.createAudioWorker( "negate.js" )
var negationNode = negationWorker.createNode()
input.connect(negationNode)
negationNode.connect(volume)
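The inversion itself is just negating every sample. A minimal sketch, with the per-sample loop as a plain function and the graph wiring shown in comments (it uses a ScriptProcessorNode, since AudioWorker remained a draft API; variable names follow the question's code):

```javascript
// Sign inversion: the "negative" wave is every sample multiplied by -1.
function invertSamples(input, output) {
  for (var i = 0; i < input.length; i++) {
    output[i] = -input[i];
  }
  return output;
}

// Browser wiring sketch (assumes audio_context, input and volume from above):
// var inverter = audio_context.createScriptProcessor(4096, 1, 1);
// inverter.onaudioprocess = function (e) {
//   invertSamples(e.inputBuffer.getChannelData(0),
//                 e.outputBuffer.getChannelData(0));
// };
// input.connect(inverter);
// inverter.connect(volume);
```

Note that a plain GainNode with gain.value = -1 performs the same inversion with less latency; and for actual cancellation the inverted wave would have to reach the listener's ear phase-aligned with the original noise, which browser audio latency makes hard in practice.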
How do we calculate the time (as an offset from performance.timing.navigationStart) of the beginning of audio recording from a microphone, using the Web Audio API (AudioContext/ScriptProcessor), in JavaScript?
With the term "beginning of audio recording", I refer to the audioTimeStamp shown in the figure below. How can it be calculated?
Such an audioTimeStamp should be associated with the beginning of the audio input stream (whose samples are processed by handling the onaudioprocess event), in the same way a mousedown event.timeStamp is associated with the action of pressing the left mouse button.
To test the correctness of the audioTimeStamp calculation, suppose a "thought experiment" as an ideal scenario: we produce a sound impulse at exactly the same time as we click the left mouse button. Suppose the impulse is found at the i-th sample of the audio buffer, so that its time can be calculated as impulseTimeStamp = audioTimeStamp + 1000*i/audioContext.sampleRate. The error can be quantified as audioDelay = impulseTimeStamp - mousedown event.timeStamp: the more correct the audioTimeStamp calculation, the lower the audioDelay we get.
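A sketch of the arithmetic in that thought experiment (the buffer-start estimate via performance.now() is an assumption; the Web Audio API offers no exact anchor for it):

```javascript
// Timestamp (ms since navigationStart) of the i-th sample of a buffer whose
// first sample corresponds to bufferStartTs.
function sampleTimeStamp(bufferStartTs, i, sampleRate) {
  return bufferStartTs + 1000 * i / sampleRate;
}

// Error metric from the thought experiment above.
function audioDelay(impulseTimeStamp, mousedownTimeStamp) {
  return impulseTimeStamp - mousedownTimeStamp;
}

// Browser sketch: estimate the buffer-start timestamp inside onaudioprocess
// by subtracting the buffer's duration from the current time; the first such
// estimate approximates audioTimeStamp.
// scriptNode.onaudioprocess = function (e) {
//   var n = e.inputBuffer.length;
//   var bufferStartTs = performance.now() - 1000 * n / audioContext.sampleRate;
// };
```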
The standard HTML5 video element seems to work using byte ranges. The problem is that the server I'm using to serve the video content doesn't support byte ranges; it only supports time ranges (it's based on ffmpeg).
E.g. I can make a query (in seconds) such as http://example.com/myvideo.mkv?range=3.40-49 and it returns the video content from the 3-second-40-ms point to the 49-second point.
Question: is it possible to feed the media source buffer using time ranges? How do I know what time ranges the media buffer needs, and when (e.g. if the client seeks using the progress bar)?
I've checked several players, such as dash.js, but they all assume the server supports byte ranges, so I can't use them.
What I've tried so far:
I was thinking of using timestampOffset to feed the array buffer with the correct (time-range-based) bytes, but I don't know when the client seeks, or to what position (in seconds). Additionally, I think I should fill in the missing range, but timestampOffset only provides an offset/start point. How do I know the range, so that I can avoid overwriting content that may already be cached/buffered?
I feel there are a lot of edge cases, so I'm wondering whether such a player, supporting time ranges instead of byte ranges, already exists. Basically I just want to provide a function (videoSourceBuffer, startTime, stopTime) that fills videoSourceBuffer with the video content starting at startTime and stopping at stopTime.
var mediaSource = new MediaSource();
mediaSource.addEventListener('sourceopen', function() {
  var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
  sourceBuffer.timestampOffset = 0;
  sourceBuffer.appendBuffer(videoBinary); // videoBinary: ArrayBuffer of the fetched media
});
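One hedged sketch of such a fill function: compute which sub-ranges of [startTime, stopTime] are not yet buffered (so already-appended content isn't overwritten), then fetch each gap with the time-range URL from the question and append it with timestampOffset set to the gap's start. The gap computation is plain JavaScript; the browser wiring (fetch, appendBuffer, the 'seeking' event for detecting seeks) is in comments. The URL shape is the one from the question; everything else is illustrative:

```javascript
// Given already-buffered [start, end] pairs in seconds (sorted, as the
// SourceBuffer.buffered TimeRanges are) and a desired window, return the
// sub-ranges still missing.
function missingRanges(buffered, startTime, stopTime) {
  const gaps = [];
  let cursor = startTime;
  for (const [s, e] of buffered) {
    if (e <= cursor || s >= stopTime) continue; // no overlap with the window
    if (s > cursor) gaps.push([cursor, Math.min(s, stopTime)]);
    cursor = Math.max(cursor, e);
    if (cursor >= stopTime) break;
  }
  if (cursor < stopTime) gaps.push([cursor, stopTime]);
  return gaps;
}

// Browser sketch (assumes each fetched fragment starts at media time 0,
// hence timestampOffset = gap start):
// async function fill(videoSourceBuffer, startTime, stopTime) {
//   const ranges = [];
//   for (let i = 0; i < videoSourceBuffer.buffered.length; i++)
//     ranges.push([videoSourceBuffer.buffered.start(i),
//                  videoSourceBuffer.buffered.end(i)]);
//   for (const [s, e] of missingRanges(ranges, startTime, stopTime)) {
//     const resp = await fetch(`http://example.com/myvideo.mkv?range=${s}-${e}`);
//     videoSourceBuffer.timestampOffset = s;
//     videoSourceBuffer.appendBuffer(await resp.arrayBuffer());
//   }
// }
// Seeks: listen for video.addEventListener('seeking', ...) and read
// video.currentTime to learn the position the user jumped to.
```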
I create three different linear chirps using code found here on SO. With some other code snippets I save those three sounds as separate .wav files. This works so far.
Now I want to play those three sounds at exactly the same time. So I thought I could use the Web Audio API and feed three oscillator nodes with the float arrays I got from the code above.
But I can't get even one oscillator node to play its sound.
My code so far (shrunk to one oscillator):
var osc = audioCtx.createOscillator();
var sineData = linearChirp(freq, (freq + signalLength), signalLength, audioCtx.sampleRate); // linearChirp from link above
// sine values; add 0 at the front because the docs states that the first value is ignored
var imag = Float32Array.from(sineData.unshift(0));
var real = new Float32Array(imag.length); // cos values
var customWave = audioCtx.createPeriodicWave(real, imag);
osc.setPeriodicWave(customWave);
osc.start();
At the moment I suspect that I don't quite understand the math behind the periodic wave.
The code that plays the three sounds at the same time works (with simple sine values in the oscillator nodes), so I assume the problem is my periodic wave.
Another question: is there a different way? Maybe using three MediaElementAudioSourceNodes linked to my three .wav files? But I don't see a way to play them at exactly the same time.
The PeriodicWave isn't a "stick a waveform in here and it will be used as a single oscillation" feature: it builds a waveform by specifying the relative strengths of various harmonics. Note that in the code you pointed to, they create a BufferSource node and point its .buffer at the result of linearchirp(). You can do that too: just use BufferSource nodes to play back the linearchirp() outputs, which (I think?) are just sine waves anyway. (If so, you could just use an oscillator and skip that whole messy "create a buffer" bit.)
If you just want to play back the buffers you've created, use BufferSource. If you want to create complex harmonics, use PeriodicWave. If you've created a single-cycle waveform and you want to play it back as a source waveform, use BufferSource and loop it.
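Following that advice, a sketch: generate each chirp as a raw sample array (a standard linear-chirp formula, shown here since the linked snippet isn't reproduced above), wrap each array in an AudioBuffer, and schedule all three AudioBufferSourceNodes with the same start(when) time so they begin together. Names are illustrative:

```javascript
// Samples of a linear chirp sweeping f0 -> f1 Hz over `duration` seconds.
function linearChirpSamples(f0, f1, duration, sampleRate) {
  const n = Math.floor(duration * sampleRate);
  const out = new Float32Array(n);
  const k = (f1 - f0) / duration; // sweep rate in Hz per second
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate;
    // Phase of a linear chirp: 2*pi * (f0*t + k/2 * t^2)
    out[i] = Math.sin(2 * Math.PI * (f0 * t + 0.5 * k * t * t));
  }
  return out;
}

// Browser sketch: play several sample arrays sample-accurately together by
// giving every AudioBufferSourceNode the same scheduled start time.
// function playTogether(audioCtx, arrays) {
//   var when = audioCtx.currentTime + 0.1; // shared start time
//   arrays.forEach(function (samples) {
//     var buf = audioCtx.createBuffer(1, samples.length, audioCtx.sampleRate);
//     buf.getChannelData(0).set(samples);
//     var src = audioCtx.createBufferSource();
//     src.buffer = buf;
//     src.connect(audioCtx.destination);
//     src.start(when);
//   });
// }
```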
I've set up a web page with a theremin and I'm trying to change the color of a web page element based on the frequency of the note being played. The way I'm generating sound right now looks like this:
osc1 = page.audioCX.createOscillator();
pos = getMousePos(page.canvas, ev);
osc1.frequency.value = pos.x;
gain = page.audioCX.createGain();
gain.gain.value = 60;
osc2 = page.audioCX.createOscillator();
osc2.frequency.value = 1;
osc2.connect(gain);
gain.connect(osc1.frequency);
osc1.connect(page.audioCX.destination);
What this does is oscillate the pitch of the sound created by osc1. I can set the color from osc1.frequency.value, but that doesn't factor in the changes applied by the other parts of the chain.
How can I get the resultant frequency from those chained elements?
You have to do the addition yourself (osc1.frequency.value + output of gain).
The best current (but see below) way to get access to the output of gain is probably to use a ScriptProcessorNode. You can just use the last sample from each buffer passed to the ScriptProcessorNode, and set the buffer size based on how frequently you want to update the color.
(Note on ScriptProcessorNode: There is a bug in Chrome and Safari that makes ScriptProcessorNode not work if it doesn't have at least one output channel. You'll probably have to create it with one input and one output, have it send all zeros to the output, and connect it to the destination, to get it to work.)
Near-future answer: You can also try using an AnalyserNode, but under the current spec, the time domain data can only be read from an AnalyserNode as bytes, which means the floating point samples are being converted to be in the range [0, 255] in some unspecified way (probably scaling the range [-1, 1] to [0, 255], so the values you need would be clipped). The latest draft spec includes a getFloatTimeDomainData method, which is probably your cleanest solution. It seems to have already been implemented in Chrome, but not Firefox, as far as I can tell.
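A sketch of the addition-plus-tap approach (the ScriptProcessorNode wiring is in comments; variable names follow the question's code, and the tap routes silence to the destination because of the bug noted above):

```javascript
// Resultant frequency = osc1's base frequency plus the modulation signal
// that is being summed into osc1.frequency (here, the gain node's output).
function resultantFrequency(baseFreq, modulationSample) {
  return baseFreq + modulationSample;
}

// Browser sketch: tap the modulation signal with a ScriptProcessorNode and
// keep the last sample of each block.
// var tap = page.audioCX.createScriptProcessor(256, 1, 1);
// var lastSample = 0;
// tap.onaudioprocess = function (e) {
//   var data = e.inputBuffer.getChannelData(0);
//   lastSample = data[data.length - 1];
//   e.outputBuffer.getChannelData(0).fill(0); // silence to the output
// };
// gain.connect(tap);
// tap.connect(page.audioCX.destination);
// // Current pitch: resultantFrequency(osc1.frequency.value, lastSample);
```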
Is there a way to render a visualization of an audio file?
Maybe with SoundManager2 / Canvas / HTML5 Audio?
Do you know some techniques?
I want to create something like this:
You have a ton of samples and tutorials here: http://www.html5rocks.com/en/tutorials/#webaudio
For the moment it works in the latest Chrome and the latest Firefox (Opera?).
Demos: http://www.chromeexperiments.com/tag/audio/
To do it now, for all visitors of a web site, you can check SoundManager2, which passes through a Flash "proxy" to access audio data: http://www.schillmania.com/projects/soundmanager2/demo/api/ (they are already working on the HTML5 audio engine, to release it as soon as the major browsers implement it).
Up to you to draw three different kinds of audio data in a canvas: waveform, equalizer, and peak.
soundManager.defaultOptions.whileplaying = function() { // audio analyzer
  $document.trigger({ // dispatch all data relative to the audio stream
    type: 'musicLoader:whileplaying',
    sound: {
      position: this.position, // in milliseconds
      duration: this.duration,
      waveformDataLeft: this.waveformData.left, // array of 256 floating-point (three-decimal-place) values from -1 to 1
      waveformDataRight: this.waveformData.right,
      eqDataLeft: this.eqData.left, // two arrays of 256 floating-point (three-decimal-place) values from 0 to 1,
      eqDataRight: this.eqData.right, // the result of an FFT on the waveform data; can be used to draw a spectrum (frequency range)
      peakDataLeft: this.peakData.left, // floating-point values from 0 to 1, indicating the "peak" (volume) level
      peakDataRight: this.peakData.right
    }
  });
};
With HTML5 you can get :
var freqByteData = new Uint8Array(analyser.frequencyBinCount);
var timeByteData = new Uint8Array(analyser.frequencyBinCount);

function onaudioprocess() {
  analyser.getByteFrequencyData(freqByteData);
  analyser.getByteTimeDomainData(timeByteData);
  /* draw your canvas */
}
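For the "draw your canvas" step, one minimal sketch: map each byte (0-255) from getByteFrequencyData to a bar height and draw one rectangle per frequency bin. The canvas id and layout are illustrative:

```javascript
// Map one byte (0..255) from getByteFrequencyData to a bar height in pixels.
function barHeight(byteValue, canvasHeight) {
  return (byteValue / 255) * canvasHeight;
}

// Browser sketch, assuming a <canvas id="vis"> element:
// var canvas = document.getElementById('vis');
// var ctx2d = canvas.getContext('2d');
// function draw() {
//   analyser.getByteFrequencyData(freqByteData);
//   ctx2d.clearRect(0, 0, canvas.width, canvas.height);
//   var barWidth = canvas.width / freqByteData.length;
//   for (var i = 0; i < freqByteData.length; i++) {
//     var h = barHeight(freqByteData[i], canvas.height);
//     ctx2d.fillRect(i * barWidth, canvas.height - h, barWidth, h);
//   }
//   requestAnimationFrame(draw);
// }
```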
Time to work! ;)
Run samples through an FFT, and then display the energy within a given range of frequencies as the height of the graph at a given point. You'll normally want the frequency ranges going from around 20 Hz at the left to roughly half the sampling rate at the right (or 20 kHz if the sampling rate exceeds 40 kHz).
I'm not so sure about doing this in JavaScript though. Don't get me wrong: JavaScript is perfectly capable of implementing an FFT -- but I'm not at all sure about doing it in real time. OTOH, for user viewing, you can get by with around 5-10 updates per second, which is likely to be a considerably easier target to reach. For example, 20 ms of samples updated every 200 ms might be halfway reasonable to hope for, though I certainly can't guarantee that you'll be able to keep up with that.
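For illustration, a naive O(n²) DFT that returns the magnitude per frequency bin; it is far slower than a real FFT but fine for small windows at a few updates per second. Bin k corresponds to frequency k * sampleRate / n:

```javascript
// Naive discrete Fourier transform: magnitude of each of the first n/2 bins.
function dftMagnitudes(samples) {
  const n = samples.length;
  const mags = new Float64Array(Math.floor(n / 2));
  for (let k = 0; k < mags.length; k++) {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (2 * Math.PI * k * t) / n;
      re += samples[t] * Math.cos(angle);
      im -= samples[t] * Math.sin(angle);
    }
    mags[k] = Math.sqrt(re * re + im * im);
  }
  return mags;
}
```

A pure sine at bin k produces a single spike at index k, which is exactly the bar you would draw for that frequency range.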
http://ajaxian.com/archives/amazing-audio-sampling-in-javascript-with-firefox
Check out the source code to see how they're visualizing the audio
This isn't possible yet except by fetching the audio as binary data and unpacking the MP3 (not JavaScript's forte), or maybe by using Java or Flash to extract the bits of information you need (it seems possible, but it also seems like more of a headache than I personally would want to take on).
But you might be interested in Dave Humphrey's audio experiments, which include some cool visualization stuff. He's doing this by making modifications to the browser source code and recompiling it, so this is obviously not a realistic solution for you. But those experiments could lead to new features being added to the <audio> element in the future.
For this you would need to do a Fourier transform (look up FFT), which will be slow in JavaScript and not possible in real time at present.
If you really want to do this in the browser, I would suggest doing it in Java/Silverlight, since they deliver the fastest number-crunching speed in the browser.