adding sound "volume" to a bit of javascript code (audiocontext) - javascript

I have this little JavaScript snippet working; it plays a tone for 30 seconds. I wanted to see if it is possible to modify it so that it plays at, say, half volume instead (it's just too loud). I have researched and found information about the "gain" node, but I'm a novice with JavaScript and I don't fully understand the examples I've seen.
Here is what I have so far (it works well, I just want to turn the volume down some):
function myFunction(frequency, duration, callback) {
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var oscillator = audioCtx.createOscillator();
    duration = 30000 / 1000; // hard-coded to 30 seconds; this used to be just 'duration'
    oscillator.type = 'square';
    oscillator.frequency.value = 500; // value in hertz
    oscillator.connect(audioCtx.destination);
    oscillator.onended = callback;
    oscillator.start(0);
    oscillator.stop(audioCtx.currentTime + duration);
}
Can anyone help me modify this so I have a volume parameter I can tweak until it is right?

Documentation
The documentation for what you are trying to do can be found under BaseAudioContext, specifically the BaseAudioContext.createGain() method.
The MDN documentation is a little lacking in that it only provides snippets that will not work as-is. An intentionally simplified example is therefore given below; bear in mind that it may not reflect best practice.
Explanation
The oscillator and gain control are both audio nodes. As such, you should picture them in a signal chain like that given below.
The oscillator node passes through the gain node which passes through to the audio context.
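Conceptually, all a gain node does is multiply every sample flowing through it by its current gain value. The following pure-JavaScript sketch (an illustration of the arithmetic only, not actual Web Audio API code) shows why a gain of 0.5 halves the amplitude:

```javascript
// Illustration only: a GainNode multiplies each sample passing through it
// by its gain value. The real node does this natively inside the audio
// graph; this pure function just demonstrates the math.
function applyGain(samples, gain) {
  return samples.map(function (s) {
    return s * gain;
  });
}

// A full-scale square wave scaled to half volume:
applyGain([1, -1, 1, -1], 0.5); // → [0.5, -0.5, 0.5, -0.5]
```

Note that perceived loudness is not linear in amplitude, so you will likely want to experiment with the value rather than assume 0.5 sounds "half as loud".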
Solution
Using your current format, a self-contained snippet is given below.
<!DOCTYPE html>
<html>
  <head>
    <script>
      var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
      var gainNode = audioCtx.createGain();
      gainNode.connect(audioCtx.destination);
      var gain = 0.1;
      //---------------------------------------------------------------------
      gainNode.gain.setValueAtTime(gain, audioCtx.currentTime);
    </script>
  </head>
  <!-- ===================================================================== -->
  <body>
    <div>
      <button onclick="myFunction()">
        Click me
      </button>
    </div>
    <script>
      function myFunction() {
        //---------------------------------------------------------------------
        var duration = 0.5; // in seconds
        //---------------------------------------------------------------------
        var oscillator = audioCtx.createOscillator();
        oscillator.type = 'square';
        oscillator.frequency.value = 500;
        oscillator.connect(gainNode);
        oscillator.start(audioCtx.currentTime);
        oscillator.stop(audioCtx.currentTime + duration);
        //---------------------------------------------------------------------
      }
    </script>
  </body>
  <!-- ===================================================================== -->
</html>
Best practice
For best practice, I would suggest avoiding recreating the nodes over and over again. Rather, simply turn the gain up for a short period and then turn it back down; an example is given below.
As far as I can tell, there is no envelope-generator node in the Web Audio API, but you could use .linearRampToValueAtTime() if needed.
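As a sketch of what such an envelope might look like (the 50 ms attack, 0.1 peak gain, and helper name are my own arbitrary choices, and audioCtx/gainNode are assumed to exist as in the snippets here):

```javascript
// Sketch of a click-free attack/release envelope (browser-only part shown
// as comments, since it needs a live AudioContext):
//
//   var t = audioCtx.currentTime;
//   gainNode.gain.setValueAtTime(0, t);
//   gainNode.gain.linearRampToValueAtTime(0.1, t + 0.05); // 50 ms attack
//   gainNode.gain.linearRampToValueAtTime(0.0, t + 0.45); // release

// The ramp is plain linear interpolation between the previous event's
// value/time and the target value/time:
function linearRampValue(v0, t0, v1, t1, t) {
  return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}

linearRampValue(0, 0, 0.1, 0.05, 0.025); // halfway through the attack → 0.05
```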
NOTE: This is not currently working in Safari.
<!DOCTYPE html>
<html>
  <head>
    <script>
      //---------------------------------------------------------------------
      // Edit These
      var gain = 0.1;
      var duration = 1000; // in milliseconds (used with setTimeout below)
      //---------------------------------------------------------------------
      var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
      var gainNode = audioCtx.createGain();
      //---------------------------------------------------------------------
      var oscillator = audioCtx.createOscillator();
      oscillator.type = 'square';
      oscillator.frequency.value = 500;
      //---------------------------------------------------------------------
      oscillator.connect(gainNode);
      gainNode.connect(audioCtx.destination);
      //---------------------------------------------------------------------
      gainNode.gain.setValueAtTime(0.0, audioCtx.currentTime); // turned off by default
      oscillator.start(audioCtx.currentTime);
      //---------------------------------------------------------------------
    </script>
  </head>
  <!-- ===================================================================== -->
  <body>
    <div>
      <button onclick="soundOn()">
        Click me
      </button>
    </div>
    <script>
      function mute() {
        gainNode.gain.setValueAtTime(0.0, audioCtx.currentTime);
      }
      function soundOn() {
        gainNode.gain.setValueAtTime(gain, audioCtx.currentTime);
        setTimeout(mute, duration);
      }
    </script>
  </body>
  <!-- ===================================================================== -->
</html>
Further Resources
If you are struggling with interacting with the audio context in this way, I would suggest trying out a library such as p5.js and its p5.sound library.
Have a look at https://p5js.org/reference/#/p5.Oscillator to see whether it is a more intuitive approach.

Related

Glitches when using webAudio API with javascript for browser

I am having issues using the Web Audio API in JavaScript.
The problem is that I am hearing glitches in the sounds played in my browser, even though I use a gainNode to gradually increase/decrease the volume when playback starts/stops.
The audio file is simply 60 seconds of a 400 Hz tone to demonstrate the issue. In the demo I play a snippet from time point 2.0 seconds for a duration of 1 second; within this duration I ramp up for 100 ms, and at 800 ms I begin to ramp down over 199 ms. This is an attempt to avoid a glitch from stopping at a non-zero crossing. I use gainNode.gain.setTargetAtTime(), but I also tried exponentialRampToValueAtTime(). In this example I repeat the process at time point 52 seconds.
At the beginning of the code I implement an audioContext.resume() call to trigger the audio facility of the browser.
<!DOCTYPE html>
<html>
  <head>
    <title>My experiment</title>
    <audio id="audio" src="pure_400Hz_tone.ogg" preload="auto"></audio>
  </head>
  <body>
    <div id="jspsych_target"></div>
    <button onclick="dummyPress()">Press to Activate Audio</button>
    <button onclick="playTheTones()">sound the tone</button>
    <script>
      console.log("setting up audiocontext at ver 28");
      const audioContext = new AudioContext();
      const element = document.querySelector("audio");
      const source = audioContext.createMediaElementSource(element);
      const gainNode = audioContext.createGain();
      gainNode.gain.setValueAtTime(0, audioContext.currentTime);
      source.connect(gainNode);
      gainNode.connect(audioContext.destination);

      function dummyPress() {
        audioContext.resume();
        playTheTones();
      }

      function playTheTones() {
        // ******* The First Tone ***********
        // **********************************
        source.mediaElement.currentTime = 2;
        gainNode.gain.setTargetAtTime(1.0, audioContext.currentTime, 0.1);
        setTimeout(function () {
          gainNode.gain.setTargetAtTime(0.0001, audioContext.currentTime, 0.199);
          console.log("start Down # " + source.mediaElement.currentTime);
        }, 800);
        source.mediaElement.play();
        console.log("PLAYING 2 now # " + source.mediaElement.currentTime);
        setTimeout(function () {
          source.mediaElement.pause();
          console.log("STOPPED # " + source.mediaElement.currentTime);
        }, 1100);
        // ******* The Second Tone **********
        // **********************************
        setTimeout(function () {
          source.mediaElement.currentTime = 52;
          gainNode.gain.setTargetAtTime(1.0, audioContext.currentTime, 0.1);
          setTimeout(function () {
            gainNode.gain.setTargetAtTime(0.0001, audioContext.currentTime, 0.199);
            console.log("start Down # " + source.mediaElement.currentTime);
          }, 800);
          source.mediaElement.play();
          console.log("PLAYING 52 now # " + source.mediaElement.currentTime);
          setTimeout(function () {
            source.mediaElement.pause();
            console.log("STOPPED # " + source.mediaElement.currentTime);
          }, 1100);
        }, 1500);
      }
    </script>
  </body>
</html>
Unfortunately I think I have confused myself in trying to resolve the glitch issues and may not be using best practice using the API and this might be causing my problem.
Would someone look at the code and point out if I am using the API correctly and confirm that I am correct in thinking I should be able to use the API and present tones in this way without glitching.
Thanks
I found the problem:
gainNode.gain.setTargetAtTime(0.0001, audioContext.currentTime, 0.199);
The third parameter is a 'time constant', not a duration, so at 0.199 it was enormously large and the gain did not diminish quickly enough, causing the glitch. Setting it to 0.01 cures the issue!
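To make the difference concrete: setTargetAtTime() approaches its target exponentially, and one time constant only covers about 63% of the distance to the target. A small sketch of the formula (my own illustration, not part of the original code) shows why 0.199 was far too slow for a 200 ms window:

```javascript
// setTargetAtTime approaches the target exponentially:
//   v(t) = target + (v0 - target) * exp(-(t - t0) / timeConstant)
function targetAtTime(v0, target, timeConstant, dt) {
  return target + (v0 - target) * Math.exp(-dt / timeConstant);
}

// Ramping gain 1 -> 0, evaluated 0.2 s after the event:
targetAtTime(1, 0, 0.199, 0.2); // ≈ 0.37 — still far from silent, hence the click
targetAtTime(1, 0, 0.01, 0.2);  // ≈ 2e-9 — effectively silent
```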

cv.Mat is not a constructor opencv

I am getting this error: TypeError: cv.Mat is not a constructor
I have tried almost everything and can't find any solution on the internet.
index.html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Hello OpenCV.js</title>
  </head>
  <body>
    <h2>Hello OpenCV.js</h2>
    <p id="status">OpenCV.js is loading...</p>
    <div>
      <img src="dd.jpg" style="display:none;" id="img">
      <canvas id="videoInput" height="500" width="500"></canvas>
      <canvas id="canvasOutput" height="500" width="500"></canvas>
    </div>
    <script async type="text/javascript" src="index.js"></script>
    <script async src="opencv.js" onload="onOpenCvReady();" type="text/javascript"></script>
  </body>
</html>
index.js
document.getElementById('status').innerHTML = 'OpenCV.js is ready.';
let video = document.getElementById('videoInput');
let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let dst = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let gray = new cv.Mat();
let cap = new cv.VideoCapture(video);
let faces = new cv.RectVector();
let classifier = new cv.CascadeClassifier();
// load pre-trained classifiers
classifier.load('haarcascade_frontalface_default.xml');
const FPS = 30;
function processVideo() {
    try {
        if (!streaming) {
            // clean and stop.
            src.delete();
            dst.delete();
            gray.delete();
            faces.delete();
            classifier.delete();
            return;
        }
        let begin = Date.now();
        // start processing.
        cap.read(src);
        src.copyTo(dst);
        cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0);
        // detect faces.
        for (let i = 0; i < faces.size(); ++i) {
            let face = faces.get(i);
            let point1 = new cv.Point(face.x, face.y);
            let point2 = new cv.Point(face.x + face.width, face.y + face.height);
            cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
        }
        cv.imshow('canvasOutput', dst);
        // schedule the next one.
        let delay = 1000 / FPS - (Date.now() - begin);
        setTimeout(processVideo, delay);
    } catch (err) {
        utils.printError(err);
    }
}
// schedule the first one.
setTimeout(processVideo, 0);
I am also importing opencv.js, as I have a downloaded version of it. I guess there's some initialization problem; please help me solve it.
opencv.js loads and fires the onload event before it is really initialized. To wait until opencv.js is actually ready, it provides a hook, onRuntimeInitialized. Use it something like this:
function onOpenCvReady() {
    cv['onRuntimeInitialized'] = () => {
        // do all your work here
    };
}
Make sure that the script is really loaded. If the error is "cv is not defined", then either remove the async attribute from the script tag, add an onload attribute to the <script> tag, or attach a load listener to the script element:
script.addEventListener('load', () => {
    // cv is available here (once the runtime has initialized)
});
In the WASM build (and only the WASM build), cv will not be immediately available, because the WASM is compiled at runtime. Assign your start function to cv.onRuntimeInitialized.
Note that the WASM version should be faster; however, it incurs some startup overhead (a few CPU-seconds). The non-WASM build doesn't call onRuntimeInitialized at all.
To handle both cases, it is possible to do this:
if (cv.getBuildInformation) {
    console.log(cv.getBuildInformation());
    onloadCallback();
} else {
    // WASM
    cv['onRuntimeInitialized'] = () => {
        console.log(cv.getBuildInformation());
        onloadCallback();
    };
}
Source:
https://answers.opencv.org/question/207288/how-can-i-compile-opencvjs-so-that-it-does-not-require-defining-the-onruntimeinitialized-field/
https://docs.opencv.org/master/utils.js
I had the exact same issue. My solution was different from the one suggested; it has more flexibility, since your operations will not be limited to a single method called at the beginning of your code.
Step 1, and it's a very important one, since it actually affected my code: make sure there are no unrelated errors on loading the page; they disrupted the loading of opencv.js for me.
Step 2: make sure you load the script synchronously.
<script src="<%= BASE_URL %>static/opencv.js" id="opencvjs"></script>
That was it for me. It worked perfectly from that point.

OpenCV.JS Uncaught ReferenceError: cv is not defined

I am working on a computer vision project using OpenCV.js and Spring Boot, with Thymeleaf as my HTML5 template engine. I have been following the OpenCV.js tutorial here. I am supposed to get two outputs: one that displays the video input, and a canvas output where the face tracking takes place.
However, while the video displays and works as expected, the canvas does not show. When I inspect my code in the Chrome browser, I see an Uncaught ReferenceError saying cv is not defined.
Can somebody tell me if there is anything I am doing wrong in my code?
Below is my code:
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org" xmlns:layout="http://www.ultraq.net.nz/web/thymeleaf/layout" layout:decorate="layout">
  <head>
    <script async type="text/javascript" th:src="#{/opencv/opencv.js}"></script>
    <title>Self-Service Portal</title>
  </head>
  <body>
    <h2>Trying OpenCV Javascript Computer Vision</h2>
    <p id="status">Loading with OpenCV.js...</p>
    <video id="video" autoplay="true" width="300" height="225"></video> <br/>
    <canvas id="canvasOutput" width="300" height="225"></canvas>
    <!--div>
      <div class="inputoutput">
        <img id="imageSrc" alt="No Image" />
        <div class="caption">ImageScr<input type="file" id="fileInput" name="file" /></div>
      </div>
      <div class="inputoutput">
        <canvas id="canvasOutput"></canvas>
        <div class="caption">canvasOutput</div>
      </div>
    </div-->
    <script type="text/javascript">
      navigator.mediaDevices.getUserMedia({
        video: true,
        audio: false
      })
      .then(function (stream) {
        video.srcObject = stream;
        video.play();
      })
      .catch(function (err) {
        console.log("An error occurred while accessing media! " + err);
      });

      let video = document.getElementById('video');
      let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
      let dst = new cv.Mat(video.height, video.width, cv.CV_8UC4);
      let gray = new cv.Mat();
      let cap = new cv.VideoCapture(video);
      let faces = new cv.RectVector();
      let classifier = new cv.CascadeClassifier();
      // load pre-trained classifiers
      classifier.load('haarcascade_frontalface_default.xml');

      const FPS = 30;
      function processVideo() {
        try {
          if (!streaming) {
            // clean and stop.
            src.delete();
            dst.delete();
            gray.delete();
            faces.delete();
            classifier.delete();
            return;
          }
          let begin = Date.now();
          // start processing.
          cap.read(src);
          src.copyTo(dst);
          cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0);
          // detect faces.
          classifier.detectMultiScale(gray, faces, 1.1, 3, 0);
          // draw faces.
          for (let i = 0; i < faces.size(); ++i) {
            let face = faces.get(i);
            let point1 = new cv.Point(face.x, face.y);
            let point2 = new cv.Point(face.x + face.width, face.y + face.height);
            cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
          }
          cv.imshow('canvasOutput', dst);
          // schedule the next one.
          let delay = 1000 / FPS - (Date.now() - begin);
          setTimeout(processVideo, delay);
        }
        catch (err) {
          utils.printError(err);
        }
      }
      // schedule the first one.
      setTimeout(processVideo, 0);
    </script>
    <!--script async src="/opencv/opencv.js" onload="onOpenCvReady;" type="text/javascript"></script-->
  </body>
</html>
The cv is not defined error can be fixed by assigning cv from the window object, e.g. let cv = window.cv.
Turning off async would not be ideal, because the OpenCV.js library is large and that would hurt the initial load time. Instead, assign a state variable that changes when the library finishes loading, run a check on this variable, and update the UI accordingly.
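One hedged way to implement such a readiness check is to poll for a property that only exists once the runtime has initialized (cv.Mat here; the function name, the polling interval, and the use of a getter for the global are all my own assumptions, not part of OpenCV.js):

```javascript
// Poll until the global cv object is usable, then resolve with it.
// Checking cv.Mat works because the constructor is only attached once
// the (WASM) runtime has finished initializing.
function waitForCv(getCv, intervalMs) {
  return new Promise(function (resolve) {
    (function poll() {
      var cv = getCv();
      if (cv && cv.Mat) {
        resolve(cv);
      } else {
        setTimeout(poll, intervalMs);
      }
    })();
  });
}

// Browser usage would be something like:
//   waitForCv(function () { return window.cv; }, 50).then(startProcessing);
```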
The code you find in the examples on the OpenCV page usually focuses on the functionality of the library, and sometimes it does not include all the functions needed to work. The examples on the OpenCV page usually rely on a utils.js file that contains these additional helper functions, and this is not so obvious.
Most likely your problem will be solved by the answer from this forum.
Additionally, I have created a codesandbox that contains all the functions necessary for the video examples to work. Normally they will work for you if you replace the code under the comment
/** Your example code here */
with your facial-recognition code.
The async on
<script async type="text/javascript" th:src="#{/opencv/opencv.js}"></script>
...means it won't hold up parsing and processing of the following markup while waiting for the file to load; details. So your inline script later in the file can run before that file is loaded and its JavaScript is run.
Remove async and move that script tag to the bottom of the document, just before your inline script that uses what it defines.

Why won't web audio api oscillator play in Safari Mobile?

I have been trying to get an oscillator to play in a mobile browser on iOS (it won't work in Chrome or Safari) and I am struggling. From the research I have done, I have found that you must create the oscillator (and maybe even the context) inside a touch-event handler. What I have working on desktop is an oscillator that connects to a gain node and plays a sound on a mouseenter event on a span element. Then, on a mouseout event, it disconnects from the gain node, so that on the next mouseenter event it will connect again, thus creating a new sound every time a character is hovered over.
$(".hover-letters span").on("touchend mouseenter", function () {
    audioVariables($(this));
});
$(".hover-letters span").on("touchstart mouseout", function () {
    oscillator.disconnect(gainNode);
});

function audioVariables(element) {
    resizeWindowWidth = parseInt($(window).width());
    frequency = mouseX / (resizeWindowWidth / 600);
    type = element.parent().data("type");
    sound();
}

var firstLetterInteraction = true;
function sound() {
    if (firstLetterInteraction === true) {
        audioCtx = new (window.AudioContext || window.webkitAudioContext)();
        oscillator = audioCtx.createOscillator();
        gainNode = audioCtx.createGain();
        oscillator.start();
    }
    oscillator.connect(gainNode);
    gainNode.connect(audioCtx.destination);
    oscillator.frequency.value = frequency;
    oscillator.type = type;
    firstLetterInteraction = false;
}
For some reason the sound just won't play on iOS, and no errors are shown. I'm beginning to wonder whether this is possible at all, as even examples such as the CaptureWiz example here:
How do I make Javascript beep?
and the example given on the Web Audio API site (http://webaudioapi.com/samples/oscillator/) do not work for me on mobile. Anybody have any ideas?
This works for the oscillator
<script>
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var oscillator = audioCtx.createOscillator();
    var gainNode = audioCtx.createGain();
    oscillator.connect(gainNode);
    gainNode.connect(audioCtx.destination);
    oscillator.type = 'sine';
    oscillator.frequency.value = 300;
    gainNode.gain.value = 7.5; // note: gain values above 1 amplify and may clip
</script>
<button onclick="oscillator.start();">play</button>
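On iOS specifically, the context usually starts out in the 'suspended' state and can only be resumed from inside a user-gesture handler, so something like the following sketch may also be needed (the touchend wiring and the helper name are my own; only the suspended-state behaviour is documented):

```javascript
// iOS creates AudioContexts in the 'suspended' state until a user gesture.
// A tiny pure helper makes the check explicit; the handler wiring below
// (in comments) is browser-only.
function needsResume(state) {
  return state === 'suspended';
}

// Browser usage sketch, assuming audioCtx and oscillator from the answer above:
//   document.addEventListener('touchend', function () {
//     if (needsResume(audioCtx.state)) {
//       audioCtx.resume().then(function () { oscillator.start(); });
//     }
//   }, { once: true });
```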

web audio api convolver doesn't seem to output zeros

I have used the Web Audio API to connect a microphone to a convolver, the convolver to an analyser, and the analyser to a flot GUI to plot the spectrum. For testing, I set the buffer of the convolver to a unit impulse, but I don't get any output. If I bypass the convolver and connect the mic directly to the analyser, it works. Can you please help?
In the code below, use_convolver determines whether to bypass the convolver.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta charset="utf-8">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
    <script src="http://www.flotcharts.org/flot/jquery.flot.js" type="text/javascript"></script>
  </head>
  <body>
    <h1>Audio Spectrum</h1>
    <div id="placeholder" style="width:400px; height:200px; display: inline-block;">
    </div>
    <script>
      var microphone;
      var analyser;
      var convolver;
      // user media
      navigator.getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);
      if (navigator.getUserMedia) {
        console.log('getUserMedia supported.');
        navigator.getUserMedia(
          // constraints - only audio needed for this app
          {
            audio: true,
            echoCancellation: true
          },
          // Success callback
          user_media_setup,
          // Error callback
          function (err) {
            console.log('The following gUM error occurred: ' + err);
          });
      } else {
        console.log('getUserMedia not supported on your browser!');
      }

      function user_media_setup(stream) {
        console.log('user media setup');
        // set up forked web audio context, for multiple browsers
        // window. is needed otherwise Safari explodes
        audioCtx = new (window.AudioContext || window.webkitAudioContext)();
        // microphone
        microphone = audioCtx.createMediaStreamSource(stream);
        // analyser
        analyser = audioCtx.createAnalyser();
        analyser.fftSize = 1024;
        analyser.smoothingTimeConstant = 0.85;
        // convolver
        convolver = audioCtx.createConvolver();
        convolver.normalize = true;
        convolverBuffer = audioCtx.createBuffer(1, 1, audioCtx.sampleRate);
        // convolverBuffer[0] = 1; // wrong
        convolverChannel = convolverBuffer.getChannelData(0);
        convolverChannel[0] = 1;
        convolver.buffer = convolverBuffer;
        // connectivity
        var use_convolver = false;
        if (use_convolver) {
          // through convolver:
          microphone.connect(convolver);
          convolver.connect(analyser);
        } else {
          // direct:
          microphone.connect(analyser);
        }
        visualize();
      }

      function visualize() {
        console.log('visualize');
        dataArray = new Float32Array(analyser.frequencyBinCount);
        draw = function () {
          analyser.getFloatFrequencyData(dataArray);
          var data = [];
          for (var i = 0; i < dataArray.length; i++) {
            freq = audioCtx.sampleRate * i / dataArray.length / 2;
            data.push([freq, dataArray[i]]);
          }
          var options = {
            yaxis: {
              min: -200,
              max: 0
            }
          };
          $.plot("#placeholder", [data], options);
          window.requestAnimationFrame(draw);
        };
        window.requestAnimationFrame(draw);
      }
    </script>
  </body>
</html>
convolverBuffer[0] is the wrong way to access the sample data in the buffer. You need to call convolverBuffer.getChannelData(0) to get the sample array to modify.
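For intuition: convolving a signal with a single-sample kernel of [1] (a unit impulse) is the identity operation, which is why a correctly filled one-sample buffer should pass the microphone signal through unchanged. A minimal direct-form convolution (an illustration only — the real ConvolverNode computes this from the AudioBuffer you assign) demonstrates the idea:

```javascript
// Direct-form discrete convolution: out[n] = sum_k signal[k] * kernel[n - k].
function convolve(signal, kernel) {
  var out = new Array(signal.length + kernel.length - 1).fill(0);
  for (var i = 0; i < signal.length; i++) {
    for (var j = 0; j < kernel.length; j++) {
      out[i + j] += signal[i] * kernel[j];
    }
  }
  return out;
}

// Convolution with a unit impulse leaves the signal unchanged:
convolve([0.5, -0.25, 1], [1]); // → [0.5, -0.25, 1]
```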
#aldel This problem was bugging me for a few frustrating days; many thanks for this tip. I can confirm that this is an issue in Firefox as well: it seems that if you use a mono WAV file as the buffer for the convolver, you will not get any output from it.
After I switched to a stereo WAV impulse response as the buffer, the convolver worked.
Also, a tip I learned today: Firefox's web audio tools (enabled by clicking the gear in the top-right section of the Firefox dev tools and checking 'Web Audio' over on the left) are really useful for visualizing the order of your nodes. You can also easily switch a node on/off (bypass it) to see if it is causing problems in your audio context.
