With the Web Audio API, is there a way to discover a node's connections?
For example, given
ctx = new AudioContext();
g1 = ctx.createGain();
g2 = ctx.createGain();
g1.connect(g2);
is there a method I can call on g1 that will return [g2]?
I'm interested in writing a JavaScript library to visualize the current audio graph, similar to the Firefox Web Audio Editor.
You could potentially do something like this:
// keep references to the native methods
var connect = AudioNode.prototype.connect;
var disconnect = AudioNode.prototype.disconnect;

AudioNode.prototype.connect = function( dest ) {
  // lazily create the list of tracked connections
  this._connections || ( this._connections = [] );
  if ( this._connections.indexOf( dest ) === -1 ) {
    this._connections.push( dest );
  }
  // delegate to the native connect
  return connect.apply( this, arguments );
};

AudioNode.prototype.disconnect = function() {
  // the argument-less form disconnects everything
  this._connections = [];
  return disconnect.apply( this, arguments );
};
This is a quick example, and it doesn't account for disconnect arguments. But something along those lines could work, I think.
There are good reasons not to patch built-in prototypes like this, but it would allow you to keep the application code generic, which is really what you need if you want to be able to visualize arbitrary audio graphs.
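For the visualization itself, you could then walk the tracked _connections to build an adjacency list. A rough sketch, assuming the patch above is installed before any nodes are connected (buildGraph is just an illustrative name):

function buildGraph( rootNode ) {
  var nodes = [];
  var edges = [];
  ( function walk( node ) {
    if ( nodes.indexOf( node ) !== -1 ) {
      return; // already visited
    }
    nodes.push( node );
    ( node._connections || [] ).forEach( function( dest ) {
      edges.push( [ node, dest ] );
      walk( dest );
    } );
  } )( rootNode );
  return { nodes: nodes, edges: edges };
}

// e.g. buildGraph( g1 ) from the question would yield nodes [ g1, g2 ] and one edge [ g1, g2 ]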
The short answer is no - there is no such method. You'll have to keep track of your connections yourself.
I'm having some trouble trying to reproduce different frequencies on two different audio channels (left and right) in JavaScript. I've been searching on Stack Overflow and the internet for a while, but I didn't find anything that could help me, so I decided to ask here.
Let me explain first why I'm doing this. A lot of people in the world have tinnitus (a condition where you hear a specific frequency in one ear or in both). Sometimes people say that tinnitus is not a big problem. The website will allow users to experience how a person with tinnitus hears. To accomplish that, the audio must be different in each ear, so I need to send different frequencies to two different audio channels.
This is the code I already have; it plays a specific frequency in mono (full app here: replit.com/Tupiet/hearing):
function letsStart() {
  try {
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();
  }
  catch(e) {
    alert("API isn't working");
  }
}

function initFrequency() {
  let range = document.getElementById('range').value;
  osc = context.createOscillator();
  osc.frequency.value = range;
  osc.connect(context.destination);
  osc.start(0);
  document.querySelector(".show-frequency").innerHTML = range + "Hz";
}
The code above plays a specific frequency in mono mode, but as I explained, I need to play it on a specific audio channel.
By the way, the only question I found that I thought could help me was this one, but I don't think it's what I'm looking for, since it doesn't work with frequencies.
How can I do it? I couldn't find an explanation anywhere. Thanks a lot!
You can achieve the desired result by using a ChannelMergerNode. It can be used to piece together a stereo signal.
Here is an example with two independent oscillators.
const audioContext = new AudioContext();
const leftOscillator = audioContext.createOscillator();
const leftGain = audioContext.createGain();
const rightOscillator = audioContext.createOscillator();
const rightGain = audioContext.createGain();
const merger = audioContext.createChannelMerger(2);
leftOscillator.connect(leftGain).connect(merger, 0, 0);   // merger input 0 = left channel
rightOscillator.connect(rightGain).connect(merger, 0, 1); // merger input 1 = right channel
merger.connect(audioContext.destination);
leftOscillator.frequency.value = 800;
leftGain.gain.value = 0.5;
leftOscillator.start(0);
rightOscillator.frequency.value = 1400;
rightGain.gain.value = 0.8;
rightOscillator.start(0);
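Applied to your own initFrequency() code, a rough sketch could look like this (the two input ids, 'range-left' and 'range-right', are just placeholders for however you collect the two frequencies):

function initStereoFrequencies() {
  const leftFreq = document.getElementById('range-left').value;   // hypothetical input
  const rightFreq = document.getElementById('range-right').value; // hypothetical input
  const merger = context.createChannelMerger(2);
  const left = context.createOscillator();
  const right = context.createOscillator();
  left.frequency.value = leftFreq;
  right.frequency.value = rightFreq;
  left.connect(merger, 0, 0);  // channel 0 = left ear
  right.connect(merger, 0, 1); // channel 1 = right ear
  merger.connect(context.destination);
  left.start(0);
  right.start(0);
}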
Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train a model and convert it to different formats, including tfjs (model_web).
That website also provides boilerplates for running the model within a browser (react app)... just like you do - it's probably the same code, I didn't spend enough time to check.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results considering the number of examples I gave and the prediction score (0.89). The given bounding box is good too.
But, unfortunately, I didn't have "just one video" to analyze frame by frame inside a browser; I've got plenty of them. So I decided to switch to node.js, porting the code as is.
Guess what? TF.js relies on DOM and browser components, and almost no examples that work with Node exist. No big deal though, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that are split into frames, at a decent speed - although I still get the "Hello there, use tfjs-node to gain speed" banner even though I'm already using tfjs-node - but the results seem odd.
Comparing the same picture with the same model_web folder gives the same prediction but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Do the two libraries (tfjs and tfjs-node) have different implementations that make different use of the same model? I don't think it can be a problem of input because - after a long search and fight - I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Anyone made comparisons?
So... that's the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
two different ways to convert a JPG and make it work with tf.browser.fromPixels()
const fs = require('fs').promises; // promise-based fs, needed for fs.readFile(...).then(...)
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');

// Option 1: decode the JPG with the pure-JS inkjet package
const decodeJPGInkjet = (file) => {
  return new Promise((rs, rj) => {
    fs.readFile(file).then((buffer) => {
      inkjet.decode(buffer, (err, decoded) => {
        if (err) {
          rj(err);
        } else {
          rs(decoded);
        }
      });
    }).catch(rj);
  });
};

// Option 2: decode the JPG by drawing it onto a node-canvas and reading the pixels back
const decodeJPGCanvas = (file) => {
  return loadImage(file).then((image) => {
    const canvas = createCanvas(image.width, image.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0, image.width, image.height);
    const data = ctx.getImageData(0, 0, image.width, image.height);
    return {data: new Uint8Array(data.data), width: data.width, height: data.height};
  });
};
And that's the code that uses the loaded model to give predictions - the same code for node and browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js - it doesn't work on node as it is, so I replaced require("@tensorflow/tfjs") with require("@tensorflow/tfjs-node") and replaced fetch with fs.read.
const runObjectDetectionPrediction = async (graph, labels, input) => {
  const batched = tf.tidy(() => {
    const img = tf.browser.fromPixels(input);
    // Reshape to a single-element batch so we can pass it to executeAsync.
    return img.expandDims(0);
  });
  const height = batched.shape[1];
  const width = batched.shape[2];
  const result = await graph.executeAsync(batched);
  const scores = result[0].dataSync();
  const boxes = result[1].dataSync();
  // clean the webgl tensors
  batched.dispose();
  tf.dispose(result);
  const [maxScores, classes] = calculateMaxScores(
    scores,
    result[0].shape[1],
    result[0].shape[2]
  );
  const prevBackend = tf.getBackend();
  // run post process in cpu
  tf.setBackend("cpu");
  const indexTensor = tf.tidy(() => {
    const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
    return tf.image.nonMaxSuppression(
      boxes2,
      maxScores,
      20,  // maxNumBoxes
      0.5, // iou_threshold
      0.5  // score_threshold
    );
  });
  const indexes = indexTensor.dataSync();
  indexTensor.dispose();
  // restore previous backend
  tf.setBackend(prevBackend);
  return buildDetectedObjects(
    width,
    height,
    boxes,
    maxScores,
    indexes,
    classes,
    labels
  );
};
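For completeness, the decoders and the prediction function above could be wired together roughly like this (a sketch, assuming graph and labels have already been loaded, e.g. via tf.loadGraphModel and a labels JSON file):

const detectObjectsInFrame = async (graph, labels, file) => {
  // decodeJPGCanvas() yields {data, width, height}, which tf.browser.fromPixels() accepts
  const image = await decodeJPGCanvas(file);
  return runObjectDetectionPrediction(graph, labels, image);
};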
Do different implementations of the libraries (tfjs and tfjs-node) make different use of the same model?
If the same model is deployed both in the browser and in nodejs, the prediction will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from image to tensor might differ between the two environments, resulting in different tensors being used for the prediction and thus causing the output to be different.
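One quick way to check this (just a debugging suggestion, not part of the SDK code) is to log a few properties of the tensor produced by tf.browser.fromPixels in both environments, right before the prediction, and compare them:

const img = tf.browser.fromPixels(input);
console.log('shape:', img.shape);                   // should match between browser and Node
console.log('dtype:', img.dtype);                   // should be int32 in both
console.log('pixel sum:', img.sum().dataSync()[0]); // differs if the JPG decoding differs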
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses system graphics to create a browser-like canvas environment that can be used by nodejs. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is still possible to use a nodejs buffer directly to create a tensor.
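For example, with tfjs-node an image file can be turned into a tensor without any canvas or browser shim at all, along these lines (a minimal sketch):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

const imageToTensor = (file) => {
  const buffer = fs.readFileSync(file);
  // decode the JPG/PNG bytes straight into an int32 tensor of shape [height, width, channels]
  return tf.node.decodeImage(buffer, 3);
};

Whether the tensor produced this way matches the one from the canvas path is exactly the kind of thing worth comparing when the scores differ.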
I wonder, how can I obtain any WebGL program instance (WebGLProgram) from any desired WebGL context?
Fetching the WebGL context is NOT a problem. You search the DOM of the current page for the canvas element using document.getElementsByTagName(), or document.getElementById() if you know the exact canvas id:
let canvas = document.getElementById( "canvasId" );
let context = canvas.getContext( "webgl" );
Here we fetch the current context, I suppose. But if I want to get some shader parameters, or read a certain value from an already running vertex/fragment shader, I need the WebGL program which is associated with the current WebGL rendering context.
But I can't find any method in the WebGL API like context.getAttachedProgram() or context.getActiveProgram().
So what is the way to get the active WebGL program which is used for the rendering process?
Maybe there is some special WebGL parameter?
There is no way to get all the programs or any other resources from a WebGL context. If the context already exists, the best you can do is look at the currently bound resources with things like gl.getParameter(gl.CURRENT_PROGRAM), etc.
What you can do instead is wrap the WebGL context:
var allPrograms = [];

someContext.createProgram = (function(oldFunc) {
  return function() {
    // call the real createProgram
    var prg = oldFunc.apply(this, arguments);
    // if a program was created save it
    if (prg) {
      allPrograms.push(prg);
    }
    return prg;
  };
}(someContext.createProgram));
Of course you'd need to wrap gl.deleteProgram as well to remove things from the array of all programs.
someContext.deleteProgram = (function(oldFunc) {
  return function(prg) {
    // call the real deleteProgram
    oldFunc.apply(this, arguments);
    // remove the program from allPrograms
    var ndx = allPrograms.indexOf(prg);
    if (ndx >= 0) {
      allPrograms.splice(ndx, 1);
    }
  };
}(someContext.deleteProgram));
These are the techniques used by things like the WebGL Inspector and the WebGL Shader Editor Extension.
If you want to wrap all contexts you can use a similar technique to wrap getContext.
HTMLCanvasElement.prototype.getContext = (function(oldFunc) {
  return function(type) {
    var ctx = oldFunc.apply(this, arguments);
    if (ctx && (type === "webgl" || type === "experimental-webgl")) {
      ctx = wrapTheContext(ctx);
    }
    return ctx;
  };
}(HTMLCanvasElement.prototype.getContext));
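wrapTheContext() above is only referenced, not defined; a minimal sketch of it, reusing the same createProgram/deleteProgram wrapping shown earlier, could look like this:

function wrapTheContext(ctx) {
  ctx.createProgram = (function(oldFunc) {
    return function() {
      var prg = oldFunc.apply(this, arguments);
      if (prg) {
        allPrograms.push(prg); // track every program created on this context
      }
      return prg;
    };
  }(ctx.createProgram));
  ctx.deleteProgram = (function(oldFunc) {
    return function(prg) {
      oldFunc.apply(this, arguments);
      var ndx = allPrograms.indexOf(prg);
      if (ndx >= 0) {
        allPrograms.splice(ndx, 1); // stop tracking deleted programs
      }
    };
  }(ctx.deleteProgram));
  return ctx;
}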
gl.getParameter(gl.CURRENT_PROGRAM). Check out https://www.khronos.org/files/webgl/webgl-reference-card-1_0.pdf, page 2, right column.
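And if all you need from an already-running page is the currently bound program and its uniform values, a short sketch along those lines (assuming gl is the context you grabbed from the canvas):

var program = gl.getParameter(gl.CURRENT_PROGRAM);
if (program) {
  var numUniforms = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
  for (var i = 0; i < numUniforms; ++i) {
    var info = gl.getActiveUniform(program, i);          // name, type, size of each uniform
    var loc = gl.getUniformLocation(program, info.name);
    console.log(info.name, gl.getUniform(program, loc)); // read its current value
  }
}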
I'm using the online version of Scala-js-fiddle. So far, I've been able to successfully create an AudioContext:
val ctx = js.Dynamic.newInstance(js.Dynamic.global.AudioContext)()
Now, I want to create an oscillator node. I tried (unsuccessfully):
val oscillator = ctx.js.Dynamic.global.createOscillator()
When I saved this, the Scala-js-fiddle said compilation was successful. However, I also had error messages. The main one was:
TypeError: Cannot read property 'Dynamic' of undefined
How can I properly create an oscillator node and set the value of its frequency using js.Dynamic?
In regular JavaScript, I would simply write something like this:
var oscillator = ctx.createOscillator();
oscillator.frequency.value = 400;
Would I have to use js.Global.Function(...)? How would that work?
Solved! As it turns out, after creating an AudioContext like this
val ctx = js.Dynamic.newInstance(js.Dynamic.global.AudioContext)()
you can call the JavaScript methods normally:
val o = ctx.createOscillator()
I have an oscillator to generate the frequencies of a keyboard. It all works when I output to the speakers, but as well as outputting to the speakers I would like to buffer it so that I can turn it into base64 and use it again later. The only examples of this I have seen use XHR, which I don't need, since I want to just add a node into the modular routing that takes the input, stores it in an array, and then outputs to the hardware.
Something like this:
var osc = ctx.createOscillator();
osc.type = "triangle"; // the old numeric type constants (e.g. 3) are deprecated
osc.frequency.value = freq;
osc.connect(buffer);
buffer.connect(ctx.destination);
Is this possible?
Have you considered utilizing a ScriptProcessorNode?
See: http://www.w3.org/TR/webaudio/#ScriptProcessorNode
You would attach an eventListener to this node, allowing you to capture arrays of audio samples as they pass through. You could then save these buffers and manipulate them as you wish.
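A rough sketch of that approach (the buffer size of 4096 and the mono channel counts here are just example values):

var processor = ctx.createScriptProcessor(4096, 1, 1);
var capturedBuffers = [];

processor.onaudioprocess = function(event) {
  var input = event.inputBuffer.getChannelData(0);
  // copy the samples, since the underlying buffer is reused between callbacks
  capturedBuffers.push(new Float32Array(input));
  // pass the audio through unchanged so it still reaches the speakers
  event.outputBuffer.getChannelData(0).set(input);
};

osc.connect(processor);
processor.connect(ctx.destination);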
Have you checked out RecorderJs? https://github.com/mattdiamond/Recorderjs. I think it does what you need.
I have solved my problem by using Matt's Recorder.js https://github.com/mattdiamond/Recorderjs and connecting it to a GainNode which acts as an intermediary between a number of oscillators and ctx.destination. I will be using localStorage, but here is an example using an array (this does not include the oscillator setup).
// gainNode is the node the oscillators are already connected to (setup not shown)
var recorder = new Recorder(gainNode, { workerPath: "../recorderWorker.js" });
recorder.record();

var recordedSound = [];

function recordSound() {
  // exportWAV hands back the audio recorded so far as a WAV blob
  recorder.exportWAV(function(blob) {
    recordedSound.push(blob);
  });
}

function play(i) {
  var audio = new Audio(window.URL.createObjectURL(recordedSound[i]));
  audio.play();
}