I was building an audio program and hit a stumbling block on the .createMediaElementSource method. I was able to solve the problem, but I do not quite know why the solution works.
In my HTML, I created an audio player: <audio id="myAudio"><source src="music.mp3"></audio>
Now in my JS:
context = new AudioContext();
audio = document.getElementById('myAudio');
source = context.createMediaElementSource(audio);
audio.play();
doesn't work. The audio element loads, but it doesn't play the song, and no sound comes out.
However! This JS code works:
context = ...; //same as above
audio...;
source = context.createMediaElementSource(audio[0]);
audio.play();
All I changed was adding [0] after audio, and the program suddenly works again. Since .getElementById doesn't return an array, I don't know why referring to audio as an array works but referring to audio directly does not.
A few months late, but in case others stumble upon this and want an answer:
This behaviour is described in the Web Audio API spec:
The createMediaElementSource method
Creates a MediaElementAudioSourceNode given an HTMLMediaElement. As a consequence of calling this method, audio playback from the HTMLMediaElement will be *re-routed* into the processing graph of the AudioContext.
Emphasis mine. Since the output from the audio element is now routed into the newly created MediaElementAudioSourceNode instance (instead of the original destination, usually your speakers), you need to route the output of the instance back to the original destination:
var audio = document.getElementById('myAudio');
var ctx = new AudioContext();
var src = ctx.createMediaElementSource(audio);
src.connect(ctx.destination); // connect the output of the source to your speakers
audio.play();
The reason it worked when you added [0] is that document.getElementById doesn't return an array, or an element with a defined key of "0". As such, you might as well have written ctx.createMediaElementSource(undefined), which doesn't re-route the audio from the #myAudio element.
I'm trying to create a video conference web app. The problem is that when I disable my camera in the middle of the conference, it works in the sense that my page shows a blank video, but my laptop's camera indicator light stays on. Is that normal, or am I missing something?
Here is what I tried:
videoAction() {
    navigator.mediaDevices.getUserMedia({
        video: true,
        audio: true
    }).then(stream => {
        this.myStream = stream
    })
    this.myStream.getVideoTracks()[0].enabled = !(this.myStream.getVideoTracks()[0].enabled)
    this.mediaStatus.video = this.myStream.getVideoTracks()[0].enabled
}
There is also a stop() method, which should do the trick in Chrome and Safari. In Firefox, setting the enabled property should already mark the camera as unused.
this.myStream.getVideoTracks()[0].stop();
Firstly, MediaStreamTrack.enabled is a Boolean, so you can simply assign the value false.
To simplify your code, you might call:
var vidTracks = myStream.getVideoTracks();
vidTracks.forEach(track => track.enabled = false);
When MediaStreamTrack.enabled = false, the track passes empty frames to the stream, which is why black video is sent. The camera/source itself is not stopped; I believe the webcam light will turn off on Mac devices, but perhaps not on Windows, etc.
.stop(), on the other hand, completely stops the track and tells the source it is no longer needed. If the source is only connected to this track, the source itself will completely stop. Calling .stop() will definitely turn off the camera and webcam light, but you won't be able to turn it back on in your stream instance (since its video track was destroyed). Therefore, completely turning off the camera is not what you want to do; just stick to .enabled = false to temporarily disable video and .enabled = true to turn it back on.
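For example, a minimal toggle based on this approach might look like the sketch below (assuming this.myStream has already been assigned inside the getUserMedia callback, as in the question's code):
videoAction() {
    // Toggle the existing video track instead of requesting a new stream.
    var videoTrack = this.myStream.getVideoTracks()[0];
    videoTrack.enabled = !videoTrack.enabled; // empty (black) frames are sent while disabled
    this.mediaStatus.video = videoTrack.enabled;
}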
We need to assign window.localStream = stream; inside the navigator.mediaDevices.getUserMedia callback.
Then, to stop the webcam and turn the LED light off:
localStream.getVideoTracks()[0].stop();
video.src = '';
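Put together, a rough sketch of that approach could look like this (the video element and its id are assumptions here, and srcObject is used to attach the stream):
var video = document.getElementById('video'); // assumed element id

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(function (stream) {
        window.localStream = stream; // keep a reference so we can stop it later
        video.srcObject = stream;
    });

function stopCamera() {
    // Stopping the track releases the camera, so the LED indicator turns off.
    localStream.getVideoTracks()[0].stop();
    video.srcObject = null;
}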
Suppose you use the Web Audio API to play a pure tone:
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C (frequency is an AudioParam, so set .value)
src.connect(ctx.destination);
src.start();
But, later on you decide you want to stop the sound:
src.stop();
From this point on, src is now completely useless; if you try to start it again, you get:
src.start()
VM564:1 Uncaught DOMException: Failed to execute 'start' on 'AudioScheduledSourceNode': cannot call start more than once.
at <anonymous>:1:5
If you were making, say, a little online keyboard, you're constantly turning notes on and off. It seems really clunky to remove the old object from the audio node graph, create a brand new object, connect() it into the graph, and then discard it later, when it would be simpler to just turn it on and off as needed.
Is there some important reason the Web Audio API does things like this? Or is there some cleaner way of restarting an audio source?
Use connect() and disconnect(). You can then change the values of any AudioNode to change the sound.
(The button is there because an AudioContext requires a user gesture to start in the snippet.)
play = () => {
    d.addEventListener('mouseover', () => src.connect(ctx.destination));
    d.addEventListener('mouseout', () => src.disconnect(ctx.destination));
    ctx = new AudioContext();
    src = ctx.createOscillator();
    src.frequency.value = 261.63; // play middle C
    src.start();
}
div {
    height: 32px;
    width: 32px;
    background-color: red
}
div:hover {
    background-color: green
}
<button onclick='play();this.disabled=true;'>play</button>
<div id='d'></div>
This is exactly how the Web Audio API works. Sound generator nodes like oscillator nodes and audio buffer source nodes are intended to be used once. Every time you want to play your oscillator, you have to create it and set it up, just like you said. I know it seems like a hassle, but you can abstract it into a play() method that handles those details for you, so you don't have to think about it every time you play an oscillator. Also, don't worry about the performance implications of creating so many nodes; the Web Audio API is intended to be used this way.
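For example, a small helper along these lines (the function name and arguments are just an illustration) hides the create-and-discard pattern:
var ctx = new AudioContext();

function playNote(freq, duration) {
    // Create a fresh oscillator for every note; they are cheap, one-shot nodes.
    var osc = ctx.createOscillator();
    osc.frequency.value = freq;
    osc.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + duration); // schedule the stop, then let the node be discarded
}

playNote(261.63, 0.5); // middle C for half a second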
If you just want to make music on the internet, and you're not as interested in learning the ins and outs of the web audio api, you might be interested in using a library I wrote that makes things like this easier: https://github.com/rserota/wad
I am working on a 12-voice polyphonic synthesizer with 2 oscillators per voice.
I now never stop the oscillators; I disconnect them instead. You can do that with setTimeout: for the delay, take the longest release phase (of the two amp envelopes) for this set of oscillators, subtract AudioContext.currentTime, and multiply by 1000 (setTimeout works with milliseconds, Web Audio works with seconds).
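A rough sketch of that idea, with the release end time passed in as an assumption rather than taken from any particular envelope code:
function releaseVoice(oscillators, releaseEnd) {
    // releaseEnd is an absolute AudioContext time (in seconds) at which the longest release phase finishes
    var delayMs = (releaseEnd - ctx.currentTime) * 1000; // setTimeout uses milliseconds, Web Audio uses seconds
    setTimeout(function () {
        oscillators.forEach(function (osc) {
            osc.disconnect(); // the oscillator keeps running; it is just no longer audible
        });
    }, delayMs);
}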
As stated in the title, I have been running into an issue with an HTMLVideoElement connected to the Web Audio API in Firefox.
The following sample gives a minimal example reproducing the issue:
var video = document.getElementById('video');
var ctx = new AudioContext();
var sourceNode = ctx.createMediaElementSource(video);
sourceNode.connect(ctx.destination);
video.playbackRate = 3;
video.play();
As soon as the video element is connected to the audio pipeline, I cannot get the playbackRate setter to work anymore.
I've been looking for a way to set this value somewhere on the AudioContext or the MediaElementAudioSourceNode objects, but those classes do not seem to handle playback rate on their own.
Please note that this sample works fine in Chrome, and I don't really see what the problem is here.
Thanks
Already reported on Firefox's bug tracker: https://bugzilla.mozilla.org/show_bug.cgi?id=966247
Let's say I want to have an app that has variable audio sources as audio tags, like so:
<audio preload="auto" src="1.mp3" controls="" class="muzz"></audio>
<audio preload="auto" src="track.mp3" controls="" class="muzz"></audio>
Depending on which of them is played, it should be passed to createMediaElementSource, and then the sound would be sent to an analyser so various things could be done with it. But it doesn't work:
var trackName;
//get the source of clicked track
$(".muzz").on("play", function(){
trackName = $(this).attr("src");
console.log("got a source: ", trackName);
audio = new Audio();
audio.src=trackName;
context = new AudioContext();
analyser = context.createAnalyser();
source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
letsDraw();
});
The console log displays the correct source name. The letsDraw() method is supposed to draw a spectrogram of the audio playing:
function letsDraw() {
    console.log("draw called");
    window.requestAnimationFrame(letsDraw);
    fbc_array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(fbc_array); // get frequency data from the analyser node
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = "white";
    ctx.font = "bold 12px Arial";
    ctx.fillText("currently playing:" + trackName, 10, 20); // this works
    bars = 150;
    for (var i = 0; i < analyser.frequencyBinCount; i++) { // but this doesn't
        /* fill the canvas */
        x = i * 2;
        barWidth = 1;
        barHeight = -(fbc_array[i] / 1.8);
        // colours react to the frequency loudness
        hue = parseInt(500 * (1 - (barHeight / 200)), 10);
        ctx.fillStyle = 'hsl(' + hue + ',75%,50%)';
        ctx.fillRect(x, canvas.height, barWidth, barHeight);
    }
}
It was working fine with one fixed audio source, but it fails with variable sources. Any ideas would be very much appreciated.
Of course, no errors are thrown in the console at all.
What I don't get is why you take the src and put that in a new Audio object, as you already have them.
It is also much better to create a source from both <audio> tags. You just create a function that runs on page load (when everything on the page is ready, so you won't get any errors about elements not yet existing, etc.).
Before I start writing a piece of code, what do you expect to happen? Should it be possible to have both tags playing at the same time, or should one be stopped if you click play on the other? If it shouldn't play at the same time, you'd better make one <audio> tag and create two buttons which each set the src of the tag.
Another problem with your code is that you already have the <audio> elements, yet when you want them to play you create a new audio element and assign the src to it. What is the logic behind that?
EDIT:
Here is an example of using multiple sources with only one <audio> element.
The HTML code should look like this:
<audio id="player" src="" autoplay="" controls=""></audio>
<div id="buttons">
<!--The javascript code will generate buttons with which you can play the audio-->
</div>
Then you use this JS code:
onload = function () { // this will be executed when the page is ready
    window.audioFiles = ['track1.mp3', 'track2.mp3']; // the array with all the file names
    window.player = document.getElementById('player');
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();
    source = context.createMediaElementSource(player);
    analyser = context.createAnalyser();
    source.connect(analyser);
    analyser.connect(context.destination);
    // now we take all the files and create a button for every file
    for (let x in audioFiles) { // let (not var) so each click handler keeps its own x
        var btn = document.createElement('button');
        btn.innerHTML = audioFiles[x];
        btn.onclick = function () {
            player.src = audioFiles[x];
            // so when the user clicks the button, the new source gets assigned to the audio element
        };
        document.getElementById('buttons').appendChild(btn);
    }
}
Hope the comments explain it well enough.
EDIT2:
You want to know how to do this for multiple elements. What you want to do is create all the audio elements when the page loads, and create all the sources for them. This will reduce the mess when you start playing audio. The approach is described below.
All you need is a for loop that runs over every media file you have, creates an audio element for it with the appropriate source, then creates a source node for it (createMediaElementSource) and connects that source node to the analyser.
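A sketch of that loop might look like this (the file list, context, and analyser are assumed to be set up as in the earlier snippet):
var audioFiles = ['track1.mp3', 'track2.mp3'];
var audioElements = [];
for (var i = 0; i < audioFiles.length; i++) {
    // one <audio> element and one source node per file, created once on page load
    var el = new Audio(audioFiles[i]);
    var srcNode = context.createMediaElementSource(el);
    srcNode.connect(analyser); // the analyser is already connected to context.destination
    audioElements.push(el);
}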
I also want to say something about your visualiser code, though. If you do not change the font, colour, or anything else, you don't need to set them every animation frame; once at init is enough.
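For instance, the font can be set once outside the loop (fillStyle, on the other hand, is changed per bar, so it still has to be reset before the fillText call):
ctx.font = "bold 12px Arial"; // set once at init, not every frame

function letsDraw() {
    window.requestAnimationFrame(letsDraw);
    // ...per-frame drawing only; the font set above stays in effect
}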
At the top of your first code block, try this:
var trackName,
context = new AudioContext();
And remove context = new AudioContext(); from the click handler.
You should only have one AudioContext for the page.
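In other words, only the context creation moves out of the play handler; a rough sketch of the resulting structure:
var trackName,
    context = new AudioContext();

$(".muzz").on("play", function () {
    trackName = $(this).attr("src");
    // ...create the analyser and source here as before, but reuse
    // the single context created above instead of making a new one
});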
I have an oscillator to generate the frequencies of a keyboard. It all works when I output to speakers, but as well as outputting to speakers I would like to buffer it so that I can turn it into base64 and use it again later. The only examples of this I have seen use XHR, which I do not need; I want to be able to just add a node into the modular routing that takes the input, stores it in an array, and then outputs it to the hardware.
Something like this:
var osc = ctx.createOscillator();
osc.type = 'triangle'; // string types; the numeric constants are deprecated
osc.frequency.value = freq;
osc.connect(buffer);   // 'buffer' stands for the capture node I am looking for
buffer.connect(ctx.destination);
Is this possible?
Have you considered utilizing a ScriptProcessorNode?
See: http://www.w3.org/TR/webaudio/#ScriptProcessorNode
You would attach an event listener to this node, allowing you to capture arrays of audio samples as they pass through. You could then save those buffers and manipulate them as you wish.
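A minimal sketch of that idea (the buffer size and mono channel count are arbitrary choices here):
var ctx = new AudioContext();
var osc = ctx.createOscillator();
var capture = ctx.createScriptProcessor(4096, 1, 1);
var recorded = []; // blocks of captured samples

capture.onaudioprocess = function (e) {
    var input = e.inputBuffer.getChannelData(0);
    recorded.push(new Float32Array(input));      // copy the block before the buffer is reused
    e.outputBuffer.getChannelData(0).set(input); // pass the audio through unchanged
};

osc.connect(capture);
capture.connect(ctx.destination);
osc.start();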
Have you checked out RecorderJs? https://github.com/mattdiamond/Recorderjs. I think it does what you need.
I have solved my problem by using Matt's Recorder.js https://github.com/mattdiamond/Recorderjs and connecting it to a GainNode which acts as an intermediary from a number of oscillators to the ctx.destination. I will be using localStorage but here is an example using an array (this does not include the oscillator setup).
var recorder = new Recorder(gainNode, { workerPath: "../recorderWorker.js" });
recorder.record();

var recordedSound = [];

function recordSound() {
    recorder.exportWAV(function (blob) {
        recordedSound.push(blob);
    });
}

function play(i) {
    var audio = new Audio(window.URL.createObjectURL(recordedSound[i]));
    audio.play();
}