Live processing getUserMedia audio using the ScriptProcessorNode - javascript

I'm trying to do some live processing of microphone data. So I hooked up a ScriptProcessorNode to the output of my live audio as follows (CoffeeScript):
audioSource = navigator.getUserMedia {audio: true}, (stream) ->
  source = context.createMediaStreamSource(stream)
  analyser = context.createScriptProcessor(1024, 1, 1)
  source.connect(analyser)
  analyser.onaudioprocess = (e) ->
    # Processing takes place here
However, the onaudioprocess function is never called. What do I need to do to make it run?

A ScriptProcessorNode's onaudioprocess event will not fire if its output is not connected to some other node.
You can check this fiddle to see it in action.
var scr = context.createScriptProcessor(1024, 1, 1);
// uncomment the line below and onaudioprocess will start
//scr.connect(context.destination);
scr.onaudioprocess = function() {
  console.log('test');
};
Simply connect the output of your ScriptProcessor to context.destination or a dummy gain node and onaudioprocess will start.
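If you don't actually want to hear the processed stream, a zero-gain node keeps the graph "pulled" while staying silent. A minimal sketch of that pattern (the `computeRMS` helper is my own hypothetical addition, just to show a typical use of the input samples; the wiring is guarded so the snippet is inert outside a browser):

```javascript
// Hypothetical helper: RMS level of one block of samples.
function computeRMS(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}

// Browser-only wiring.
if (typeof AudioContext !== 'undefined') {
  const context = new AudioContext();
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const source = context.createMediaStreamSource(stream);
    const proc = context.createScriptProcessor(1024, 1, 1);
    const mute = context.createGain();
    mute.gain.value = 0;               // keeps the monitoring path silent
    source.connect(proc);
    proc.connect(mute);                // the output must be connected...
    mute.connect(context.destination); // ...all the way to the destination
    proc.onaudioprocess = (e) => {
      console.log(computeRMS(e.inputBuffer.getChannelData(0)));
    };
  });
}
```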

Try it like this; I think it will work for you.
var source = context.createMediaStreamSource(stream);
var proc = context.createScriptProcessor(2048, 2, 2);
source.connect(proc);
proc.connect(context.destination);
proc.onaudioprocess = function(event) {
  var audio_data = event.inputBuffer.getChannelData(0) || new Float32Array(2048);
  console.log(audio_data);
  iosocket.emit('audiomsg', audio_data);
};
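If you plan to ship those samples over a socket as in the answer above, note that raw Float32Array data is bulky; a common trick (my own addition, not part of the answer) is to quantize to 16-bit PCM before emitting. A sketch, with a hypothetical helper name:

```javascript
// Hypothetical helper: quantize [-1, 1] float samples to signed 16-bit PCM.
function floatTo16BitPCM(float32) {
  const out = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    // Clamp first: ScriptProcessor input can exceed [-1, 1].
    const s = Math.max(-1, Math.min(1, float32[i]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// Browser-only usage inside onaudioprocess, e.g.:
// iosocket.emit('audiomsg', floatTo16BitPCM(event.inputBuffer.getChannelData(0)).buffer);
```

This halves the payload relative to 32-bit floats at the cost of some precision.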

Related

Why does setSinkId not work with BiquadFilters?

I'm trying to make music player with live ability to change output device (headphones or speakers).
I have working function to change destination device with setSinkId.
I also have working Biquad Filters (low pass, high pass...) and audio processor to generate gain level bars (image below).
Filter sliders and gain bars
Some of the code (it's a lot).
Setting output device:
if (audiooutput_ch1active == 0) {
  if (typeof audiooutput_headphonesid !== "undefined") {
    audiooutput_ch1active = 1;
    await audio_ch1.setSinkId(audiooutput_headphonesid);
    return;
  }
} else if (audiooutput_ch1active == 1) {
  if (typeof audiooutput_speakersid !== "undefined") {
    audiooutput_ch1active = 0;
    await audio_ch1.setSinkId(audiooutput_speakersid);
    return;
  }
}
Defining filters:
var filter_ch1_lowpass = audioCtx_ch1.createBiquadFilter();
filter_ch1_lowpass.type = "lowpass";
filter_ch1_lowpass.frequency.value = 12000;
var filter_ch1_highpass = audioCtx_ch1.createBiquadFilter();
filter_ch1_highpass.type = "highpass";
filter_ch1_highpass.frequency.value = 0;
var filter_ch1_lowshelf = audioCtx_ch1.createBiquadFilter();
filter_ch1_lowshelf.type = "lowshelf";
filter_ch1_lowshelf.frequency.value = 100;
filter_ch1_lowshelf.gain.value = 0;
Connecting filters and processor:
audio_ch1.src = path;
source_ch1 = audioCtx_ch1.createMediaElementSource(audio_ch1);
source_ch1.connect(filter_ch1_lowpass);
filter_ch1_lowpass.connect(filter_ch1_highpass);
filter_ch1_highpass.connect(filter_ch1_lowshelf);
filter_ch1_lowshelf.connect(processor_ch1);
filter_ch1_lowshelf.connect(audioCtx_ch1.destination);
processor_ch1.connect(filter_ch1_lowshelf);
When I connect filters to my audio context, I can't use setSinkId - Error: Uncaught (in promise) DOMException: The operation could not be performed and was aborted
When I skip code that connects filters, setSinkId works fine.
Does setSinkId not support audio context filters?
I'm new to JavaScript audio.
You are setting the sink of an <audio> element after routing that element's output to your audio context. The <audio> has already lost control over its output; the audio context now controls it, so you can no longer set the sink of the <audio> element.
That you can call this method at all when not using the filters is part of a larger Chrome bug in the Web Audio API, which I believe they're actively working on: basically, the connections are really created only when the audio graph gets connected to a destination.
You could avoid that error being thrown by creating a MediaStream from your <audio> (by calling its captureStream() method) and then connecting a MediaStreamAudioSourceNode made from that MediaStream object to your audio graph. However, you'd still only be setting the sink of the input <audio>, not of the output you've been modifying in your audio context.
So instead, what you probably want is to set the sink of the audio context directly.
There is an active proposal to add a setSinkId() method to the AudioContext interface, and the latest Chrome Canary already exposes it. However, it's still a proposal, and AFAICT only Canary exposes it. So in the near future you'll just have to do
audioCtx_ch1.setSinkId(audiooutput_speakersid).catch(handlePossibleError);
but for now you'll need to do some gymnastics: first create a MediaStreamAudioDestinationNode and connect your audio graph to it, then set its MediaStream as the srcObject of another <audio> element, on which you call setSinkId().
audio_ch1.src = path;
// might be better to use createMediaStreamSource(audio_ch1.captureStream()) anyway here
source_ch1 = audioCtx_ch1.createMediaElementSource(audio_ch1);
source_ch1.connect(filter_ch1_lowpass);
filter_ch1_lowpass.connect(filter_ch1_highpass);
filter_ch1_highpass.connect(filter_ch1_lowshelf);
filter_ch1_lowshelf.connect(processor_ch1);
processor_ch1.connect(filter_ch1_lowshelf);
const outputNode = audioCtx_ch1.createMediaStreamDestination();
filter_ch1_lowshelf.connect(outputNode);
const outputElem = new Audio();
outputElem.srcObject = outputNode.stream;
outputElem.setSinkId(audiooutput_speakersid);
outputElem.play();
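Since only Canary exposes AudioContext.setSinkId() at the time of writing, you may want to feature-detect and fall back to the MediaStreamAudioDestinationNode route shown above. A sketch, assuming `audioCtx_ch1`, `filter_ch1_lowshelf`, and `audiooutput_speakersid` from the question (the `pickSinkStrategy` helper is my own hypothetical name; the wiring is guarded so it is inert outside a browser):

```javascript
// Hypothetical helper: pick the output-routing strategy the context supports.
function pickSinkStrategy(ctx) {
  return typeof ctx.setSinkId === 'function' ? 'native' : 'fallback';
}

// Browser-only wiring.
if (typeof AudioContext !== 'undefined') {
  const ctx = audioCtx_ch1; // from the question
  if (pickSinkStrategy(ctx) === 'native') {
    // Proposed API, currently Canary-only.
    ctx.setSinkId(audiooutput_speakersid).catch(console.error);
  } else {
    // Fallback: route the graph into a MediaStream and play it through
    // a second <audio> element whose sink we can set.
    const outputNode = ctx.createMediaStreamDestination();
    filter_ch1_lowshelf.connect(outputNode);
    const outputElem = new Audio();
    outputElem.srcObject = outputNode.stream;
    outputElem.setSinkId(audiooutput_speakersid);
    outputElem.play();
  }
}
```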

Web Audio: Make event occur at exact moment?

I am trying to make something where sound samples are chosen randomly at intervals so that the song evolves and is different each time it's listened to. HTML Audio was not sufficient for this, because the timing was imprecise, so I am experimenting with Web Audio, but it seems quite complicated. For now, I just want to know how to make a new audio file play at 16 seconds exactly, or 32 seconds, etc. I came across something like this
playSound.start(audioContext.currentTime + numb);
But as of now I cannot make it work.
var audioContext = new audioContextCheck();
function audioFileLoader(fileDirectory) {
  var soundObj = {};
  soundObj.fileDirectory = fileDirectory;
  var getSound = new XMLHttpRequest();
  getSound.open("GET", soundObj.fileDirectory, true);
  getSound.responseType = "arraybuffer";
  getSound.onload = function() {
    audioContext.decodeAudioData(getSound.response, function(buffer) {
      soundObj.soundToPlay = buffer;
    });
  };
  getSound.send();
  soundObj.play = function(volumeVal, pitchVal) {
    var volume = audioContext.createGain();
    volume.gain.value = volumeVal;
    var playSound = audioContext.createBufferSource();
    playSound.playbackRate.value = pitchVal;
    playSound.buffer = soundObj.soundToPlay;
    playSound.connect(volume);
    volume.connect(audioContext.destination);
    playSound.start(audioContext.currentTime);
  };
  return soundObj;
}
var harp1 = audioFileLoader("IRELAND/harp1.ogg");
var harp2 = audioFileLoader("IRELAND/harp2.ogg");
function keyPressed() {
  harp1.play(.5, 2);
  harp2.start(audioContext.currentTime + 7.5);
}
window.addEventListener("keydown", keyPressed, false);
You see, I am trying to make harp2.ogg play immediately when harp1.ogg finishes. Eventually I want to be able to choose the next file randomly, but for now I just need to know how to make it happen at all. How can I make harp2.ogg play exactly 7.5 seconds after harp1.ogg begins (or better yet, is there a way to trigger it when harp1 ends, without a gap in audio)? Help appreciated, thanks!
WebAudio should be able to start audio very precisely using start(time), down to the nearest sample time. If it doesn't, it's because the audio data from decodeAudioData doesn't contain the data you expected, or it's a bug in your browser.
Looks like when you call keyPressed, you want to trigger both songs to start playing. One immediately, and the other in 7.5 seconds.
The function to play the songs is soundObj.play and it needs to take an additional argument, which is the audioContext time to play the song. Something like: soundObj.play = function(volumeVal, pitchVal, startTime) {...}
The function keyPressed() block should look something like this:
harp1.play(.5, 2, 0);
harp2.play(1, 1, audioContext.currentTime + 7.5);
audioContext.resume();
audioContext.resume() starts the actual audio (or rather starts the audio graph timing so it does things you've scheduled)
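Putting the answer's suggestion together, the modified play function and a back-to-back schedule could be sketched as follows (the `scheduleTimes` helper is my own hypothetical addition; the Web Audio wiring is shown as comments because it assumes `soundObj` and `audioContext` from the question):

```javascript
// Hypothetical helper: absolute start times for clips played back to back,
// given the current context time and each clip's duration in seconds.
function scheduleTimes(now, durations) {
  const times = [];
  let t = now;
  for (const d of durations) {
    times.push(t);
    t += d;
  }
  return times;
}

// Browser-only sketch of the modified play (assumes soundObj from the question):
// soundObj.play = function(volumeVal, pitchVal, startTime) {
//   var volume = audioContext.createGain();
//   volume.gain.value = volumeVal;
//   var playSound = audioContext.createBufferSource();
//   playSound.playbackRate.value = pitchVal;
//   playSound.buffer = soundObj.soundToPlay;
//   playSound.connect(volume);
//   volume.connect(audioContext.destination);
//   playSound.start(startTime); // scheduled, sample-accurate
// };
//
// const [t1, t2] = scheduleTimes(audioContext.currentTime, [7.5, 4]);
// harp1.play(.5, 2, t1);
// harp2.play(1, 1, t2);
```

Scheduling both sources up front like this avoids the gap you'd get from reacting to an onended event in JavaScript.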

Is there any way to get current caption's text from video tag using Video.js?

I want to get the current subtitles' text while a video is playing (and then implement my own subtitles block, i.e. hide the original, and also use the information in a few different ways). Currently I use videojs for my player. Is there any way to get the current caption's string from it?
This code gets the current cue and puts it into the <span> element:
(function() {
  var video = document.querySelector('video');
  var span = document.querySelector('span');
  if (!video.textTracks) return;
  var track = video.textTracks[0];
  track.mode = 'hidden';
  track.oncuechange = function(e) {
    var cue = this.activeCues[0];
    if (cue) {
      span.innerHTML = '';
      span.appendChild(cue.getCueAsHTML());
    }
  };
})();
Here is a tutorial: Getting Started With the Track Element.
Yes, you can add a cuechange event listener when your video loads. This can get you the current track's caption text. Here is my working example using videojs.
var videoElement = document.querySelector("video");
var textTracks = videoElement.textTracks;
var textTrack = textTracks[0];
var kind = textTrack.kind; // e.g. "subtitles"
var mode = textTrack.mode;
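With the native track API, pulling the current caption text out of a cuechange handler can be as simple as this sketch (`activeCueText` is my own hypothetical helper, not part of Video.js or the track API):

```javascript
// Hypothetical helper: join the text of all currently active cues.
function activeCueText(track) {
  if (!track || !track.activeCues) return '';
  return Array.from(track.activeCues).map((cue) => cue.text).join('\n');
}

// Browser-only usage:
// textTrack.oncuechange = function() {
//   document.querySelector('#my-captions').textContent = activeCueText(this);
// };
```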
Try this one:
Use the HTML5 Video API to get the current source, then split the src using / and . to get the name.
Media API
The above link lists all the APIs available in the HTML5 video player. Your plugin uses the HTML5 video player!
Solution:
videojs("example_video_1").ready(function() {
  myvideo = this;
  var aTextTrack = this.textTracks()[0];
  aTextTrack.on('loaded', function() {
    console.log('here it is');
    cues = aTextTrack.cues(); // subtitles content is here
    console.log('Ready State', aTextTrack.readyState());
    console.log('Cues', cues);
  });
  // this method call triggers the subtitles to load and fires the 'loaded' event
  aTextTrack.show();
});
PS. Code found here.

How to fix frozen div when using compressor.reduction.value to monitor compression reduction

My problem is the following.
I am attempting to connect the compressor.reduction value of a compressor node to a div's height so I can monitor the compression reduction effect dynamically. This works fine. The problem is that when the audio signal stops, the div freezes at its current position. I would like the div not to freeze and instead have its height go to zero.
The way I fixed this is with a setInterval that checks the height of the div; if it stays at exactly the same height for more than a few seconds, its display is set to none, effectively hiding it. Now my question is twofold. First, if there is a better way to do this, please share. But irrespective of that, there is one little thing that is irking me that I can't figure out. When I write the code as follows it works, although it looks a bit ugly since the compressor node is outside the play function:
var compressor = audioContext.createDynamicsCompressor();
soundObj.play = function() {
  $(".compression-meter").css("display", "block");
  var playSound = audioContext.createBufferSource();
  compressor.threshold.value = -40;
  compressor.ratio.value = 20;
  playSound.buffer = soundObj.soundToPlay;
  playSound.connect(compressor);
  compressor.connect(audioContext.destination);
  playSound.start(audioContext.currentTime);
  compReductionMeter();
};

/*______________ Compressor metering __________*/
var cachedMeterValue = null;
function compReductionMeter() {
  cachedMeterValue = $(".compression-meter").height();
  var reduction = compressor.reduction.value;
  var bar = $(".compression-meter");
  bar.height((-1 * reduction) + '%');
  requestAnimationFrame(compReductionMeter);
}
window.setInterval(function() {
  if ($(".compression-meter").height() == cachedMeterValue) {
    console.log("checking compression meter height when matched with cachedMeterValue. It is " + $(".compression-meter").height());
    $(".compression-meter").css("display", "none");
  }
}, 2000);
When I write the code like this the div doesn't even appear and I am not sure why. From my view it "should" work and I really want to know why it doesn't and what I'm missing.
soundObj.play = function() {
  $(".compression-meter").css("display", "block");
  var playSound = audioContext.createBufferSource();
  var compressor = audioContext.createDynamicsCompressor(); // modified placement
  compressor.threshold.value = -40;
  compressor.ratio.value = 20;
  playSound.buffer = soundObj.soundToPlay;
  playSound.connect(compressor);
  compressor.connect(audioContext.destination);
  playSound.start(audioContext.currentTime);
  compReductionMeter(compressor.reduction.value); // modified - added argument
};

/*______________ Compressor metering __________*/
var cachedMeterValue = null;
function compReductionMeter(compVal) { // modified - added parameter
  cachedMeterValue = $(".compression-meter").height();
  var reduction = compVal; // modified - is now param value
  var bar = $(".compression-meter");
  bar.height((-1 * reduction) + '%');
  requestAnimationFrame(compReductionMeter);
}
window.setInterval(function() {
  if ($(".compression-meter").height() == cachedMeterValue) {
    console.log("checking compression meter height when matched with cachedMeterValue. It is " + $(".compression-meter").height());
    $(".compression-meter").css("display", "none");
  }
}, 2000);
Thank you.
This annoyance in DynamicsCompressorNode will be fixed in Chrome M40.
https://codereview.chromium.org/645853010/
Unfortunately, the current design of DynamicsCompressorNode keeps the gain reduction value from being updated when the stream from the source node stops. That is, the GR value is only updated while an active audio stream is running. AnalyserNode has the very same issue.
If your audio graph uses a single source node, you can use the source node's .onended event to zero the height of the div. However, if you have a more complex audio graph, it is going to be a bit more involved.
http://www.w3.org/TR/webaudio/#dfn-onended_AudioBufferSourceNode
Here is a possible hack to get zeroes into the compressor and analyser: create a new buffer of all zeroes, assign it to a new AudioBufferSourceNode, connect this node to the compressor and/or analyser, and schedule the source to start when your real source ends (or slightly before). This should keep the compressor/analyser processing, so the GR value and the analyser output drop to zero.
I didn't actually try this.
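A sketch of that silent-source hack, equally untested, with a small helper of my own for the frame count (the compressor hookup is shown as comments because it assumes the question's graph; the rest is guarded so it is inert outside a browser):

```javascript
// Helper: frame count for a buffer of the given duration in seconds.
function silentFrameCount(seconds, sampleRate) {
  return Math.max(1, Math.ceil(seconds * sampleRate));
}

// Browser-only wiring.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const frames = silentFrameCount(2, ctx.sampleRate);
  // createBuffer zero-fills its channel data, so this buffer is pure silence.
  const silence = ctx.createBuffer(1, frames, ctx.sampleRate);
  const flush = ctx.createBufferSource();
  flush.buffer = silence;
  // Connect to the same compressor/analyser chain and start it when the
  // real source ends, so the reduction value can decay to zero:
  // flush.connect(compressor);
  // realSource.onended = () => flush.start();
}
```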

Chrome issue changing source of audio element when using AudioContext

The audio sound is distorted (like it's playing twice at the same time, or something like that) when I change its source dynamically, if the element was used in createMediaElementSource of an AudioContext.
Here is a minimalist example of the error: http://jsfiddle.net/BaliBalo/wkFpv/ (It works well at first but it is going crazy when you click a link to change the source).
var audio = document.getElementById('music');
var actx = new webkitAudioContext();
var node, processor = actx.createScriptProcessor(1024, 1, 1);
processor.onaudioprocess = function(e) { /* STUFF */ };
processor.connect(actx.destination);
audio.addEventListener('canplay', canPlayFired);
function canPlayFired(event) {
  node = actx.createMediaElementSource(audio);
  node.connect(processor);
  audio.removeEventListener('canplay', canPlayFired);
}
$('a.changeMusic').click(function(e) {
  e.preventDefault();
  audio.src = $(this).attr('href');
});
If I put node.disconnect(0); before audio.src = ... it works, but the data is no longer processed. I tried a lot of things, like creating a new context, but that doesn't seem to erase the previously set JavaScript node.
Do you know how I could fix it ?
Thanks in advance.
I would suggest taking a look at this: jsbin.com/acolet/1
It seems to do the same thing you are looking for.
I found this from this post.
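One common workaround (my own sketch, not from the linked bin): createMediaElementSource may only be called once per element, so cache the node and reuse it when swapping audio.src instead of creating a new source each time. The `getMediaSource` helper name is hypothetical:

```javascript
// Hypothetical helper: cache MediaElementSource nodes per element, since
// createMediaElementSource must only be called once for a given element.
const sourceCache = new WeakMap();
function getMediaSource(ctx, element) {
  if (!sourceCache.has(element)) {
    sourceCache.set(element, ctx.createMediaElementSource(element));
  }
  return sourceCache.get(element);
}

// Browser-only usage: wire once, then just change audio.src for new tracks.
// const node = getMediaSource(actx, audio);
// node.connect(processor);
// $('a.changeMusic').click(function(e) {
//   e.preventDefault();
//   audio.src = $(this).attr('href'); // the same node keeps processing
// });
```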
