'source.connect()' not a function - Web Audio API - javascript

I'm pretty new to the Web Audio API and JavaScript in general, so this may seem really obvious to some of you, but I'm trying to create a basic audio visualiser on a JavaScript canvas.
I'm having issues with the audio context, specifically with connecting an analyser to the audio source, which is a locally stored mp3 file. 'source.connect()' is apparently not a function, even though I've copied the syntax from the Web Audio API guide at: https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API.
function SetUpAudio()
{
    let audio = document.createElement('audio');
    audio.src = 'never let u go master 3.mp3';
    let source = audio.src;
    audio.controls = 'true';
    document.body.appendChild(audio);
    audio.style.width = window.innerWidth + 'px';
    let audioCtx = new (window.AudioContext)();
    let analyser = audioCtx.createAnalyser();
    source.connect(analyser);
    audio.play();
}
This is the error that appears:
Uncaught TypeError: source.connect is not a function

You mixed up the variable names. The docs have this example:
source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
But in your case, source is set to the value of audio.src, which is just a string ('never let u go master 3.mp3'), not an audio source node, so it has no connect() method.
Since you are working with an audio element rather than a stream, create the source node with createMediaElementSource() instead:
let audioCtx = new (window.AudioContext)();
let analyser = audioCtx.createAnalyser();
let source = audioCtx.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(audioCtx.destination);
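For completeness, a minimal sketch of the whole function with that change applied (the webkit fallback is an optional extra for older browsers):
function SetUpAudio()
{
    let audio = document.createElement('audio');
    audio.src = 'never let u go master 3.mp3';
    audio.controls = true;
    document.body.appendChild(audio);
    audio.style.width = window.innerWidth + 'px';

    let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    let analyser = audioCtx.createAnalyser();

    // Wrap the audio element in a source node instead of using the src string
    let source = audioCtx.createMediaElementSource(audio);
    source.connect(analyser);
    // Route the signal on to the speakers so the track stays audible
    analyser.connect(audioCtx.destination);

    audio.play();
}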
Then take it from there. And if you have further issues, please post a new question.

Related

javascript new audioContext mono to Stereo not working

I need to convert the mp3 files I play in a project from mono to stereo with the Web Audio API. My entire system is built around sounds created with a1 = new Audio('/1.mp3');, so I can't change that. Is there a way to convert a sound created with new Audio('/1.mp3') to stereo?
var a1 = new Audio(`/1.mp3`);
a1.volume = .5;
a1.play();
I am using a simple code structure as above.
https://stackoverflow.com/a/54990085/15929287
I couldn't adapt the above answer to my case; I haven't found any way to convert the sound I created with new Audio() to stereo. The example in the link also uses an oscillator as the source. I'm just trying to build something where I can adjust the mono/stereo setting. I need your help.
The linked answer assumes the mono audio source is an oscillator created with the Web Audio API, but it is also possible to use an audio element as the source by using a MediaElementAudioSourceNode.
The audio graph would then need to be wired up like this:
const audio = new Audio('/1.mp3');
const audioCtx = new AudioContext();
// wrap the audio element in a source node
const source = audioCtx.createMediaElementSource(audio);
// one gain node per output channel
const gainNodeL = audioCtx.createGain();
const gainNodeR = audioCtx.createGain();
const merger = audioCtx.createChannelMerger(2);
// feed the mono source into both gain nodes ...
source.connect(gainNodeL);
source.connect(gainNodeR);
// ... and merge them into the left (0) and right (1) channels
gainNodeL.connect(merger, 0, 0);
gainNodeR.connect(merger, 0, 1);
merger.connect(audioCtx.destination);
Please note that it's probably still necessary to call resume() on the AudioContext in response to a user gesture to make sure the AudioContext is running.
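For example (a minimal sketch; the click listener is just one possible user gesture):
// resume the context and start playback on the first user interaction
document.addEventListener('click', () => {
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
  audio.play();
}, { once: true });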

load a music file into an audio buffer to use with offlinecontext (web audio api)

I want to use the Web Audio OfflineAudioContext to combine some short sound files into a new, longer sound file. It seems that the OfflineAudioContext uses an AudioBuffer to save the audio. I'm using the audio tag to load my sound files. How can I convert the sound files into an AudioBuffer that I can pass to the OfflineAudioContext? I don't want to have to use XMLHttpRequest to create the audio buffer. I wanted to be able to load each sound file, add it to a buffer, and then pass it to the offline context. I've tried several different ways, but the most I've managed is to pass an empty audio buffer to the context, which just gives me a file with no sound.
I've tried several features of the Web Audio context to create a new buffer, but I either get errors or no sound.
<html>
<body>
  <audio controls controlsList="nodownload" id="audio">
    <source id="audioSource" src="" />
    Your browser does not support the audio format.
  </audio>
  <script>
    // define online and offline audio context
    var audioCtx1 = new AudioContext();
    var offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);
    var channels = 2;

    // Create an empty two second stereo buffer at the
    // sample rate of the AudioContext
    var frameCount = audioCtx1.sampleRate * 2.0;
    var myArrayBuffer = audioCtx1.createBuffer(2, frameCount, audioCtx1.sampleRate);

    var source = offlineCtx.createBufferSource();
    var array = ['sound1.mp3', 'sound2.mp3', 'sound3.mp3'];
    var audio = document.querySelector('audio');
    audio.src = array[0];
    source.buffer = myArrayBuffer;
    source.connect(offlineCtx.destination);
    source.start();
    //source.loop = true;
    offlineCtx.startRendering();
    offlineCtx.oncomplete = function(e) {
      var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
      var song = audioCtx.createBufferSource();
      song.buffer = e.renderedBuffer;
      song.connect(audioCtx.destination);
      song.start();
      console.log("completed!");
    }
  </script>
</body>
</html>
I've changed the code several times during testing; with the code posted here I just get no sound.
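For context, an audio element never exposes its decoded samples, so an AudioBuffer generally has to be created by fetching and decoding the file data directly. A minimal sketch of that approach (using fetch rather than XMLHttpRequest; the file name is a placeholder):
// fetch a file and decode it into an AudioBuffer
function loadBuffer(ctx, url) {
  return fetch(url)
    .then(response => response.arrayBuffer())
    .then(data => ctx.decodeAudioData(data));
}

var offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);

loadBuffer(offlineCtx, 'sound1.mp3').then(function(buffer) {
  // play the decoded buffer through the offline context and render it
  var source = offlineCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(offlineCtx.destination);
  source.start();
  return offlineCtx.startRendering();
}).then(function(renderedBuffer) {
  console.log('rendered ' + renderedBuffer.length + ' frames');
});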

WebAudioApi StreamSource

I'd like to use the Web Audio API with streams. Prelistening is very important and isn't possible when I have to wait for each audio file to be downloaded.
Downloading the entire audio file is not what I want, but it's the only way I can get it to work at the moment:
var request = new XMLHttpRequest();
request.open('GET', src, true);
request.responseType = 'arraybuffer';
request.onload = function() {
  var audioData = request.response;
  // audioData is the entire downloaded audio file, which is required by the audioCtx anyway
  audioCtx.decodeAudioData(audioData, function(buffer) {
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    source.loop = true;
    source.start();
  },
  function(e) { console.error("Error with decoding audio data", e.err); });
};
request.send();
I found a way to use a stream when requesting it from navigator.mediaDevices:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(stream);
    source.connect(audioCtx.destination);
  });
Is it possible to use XHR instead of navigator.mediaDevices to get the stream?
//fetch doesn't support a range-header, which would make seeking impossible with a stream (I guess)
fetch(src).then(response => {
  const reader = response.body.getReader();
  //ReadableStream is not working with createMediaStreamSource
  const stream = new ReadableStream({...});
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  source.play();
});
It doesn't work, because the ReadableStream does not work with createMediaStreamSource.
My first step is to recreate the functionality of the HTML audio element, including seeking. Is there any way to get an XHR stream and feed it into an AudioContext?
The end goal is to create a single-track audio editor with fades, cutting, prelistening, mixing and export functionality.
EDIT:
Another attempt was to use the HTML audio element and create a source node from it:
var audio = new Audio();
audio.src = src;
var source = audioCtx.createMediaElementSource(audio);
source.connect(audioCtx.destination);
// the source node has no start() method now
// the mediaElement reference is not handled by the internal context scheduler
source.mediaElement.play();
The audio element supports streaming, but it cannot be handled by the context's scheduler, which matters when building an audio editor with prelistening functionality.
It would be great to point a standard source node's buffer at the audio element's buffer, but I couldn't find out how to connect them.
I ran into this problem before and have been working on a demo solution, linked below, that streams audio in chunks with the Streams API. Seeking is not currently implemented, but it could be derived. Because bypassing decodeAudioData() is currently required, custom decoders must be provided that allow for chunk-based decoding:
https://github.com/AnthumChris/fetch-stream-audio
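As a rough illustration of the chunked approach (a minimal sketch; handleChunk stands in for the custom decoder, which the demo above implements for real):
// read the response body in chunks instead of waiting for the full download
fetch(src).then(async response => {
  const reader = response.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // value is a Uint8Array of encoded audio bytes; a custom decoder must
    // turn these into PCM, since decodeAudioData() only accepts complete files
    handleChunk(value); // hypothetical decoder entry point
  }
});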

Creating MediaStream from canvas and video element

I am creating a MediaStream object and adding a video track to it from a canvas using the captureStream() function. This works fine.
However, I am trying to add audio as a separate track from a video element. I can't seem to find a way to get an AudioTrack object from an HTML video element.
Currently HTMLMediaElement.audioTracks is not supported in Chrome. According to the Mozilla developer site I should be able to use HTMLMediaElement.captureStream() to return a MediaStream object, from which I should be able to retrieve the separate tracks, but I just get a 'captureStream is not a function' error.
Perhaps I'm missing something very obvious, but I would greatly appreciate any help with this.
Below is my current code:
var stream = new MediaStream();
//Works fine for adding video source
var videotracks = myCanvas.captureStream().getTracks();
var videostream = videotracks[0];
stream.addTrack(videostream);
//Currently not supported in Chrome
var audiotracks = myVid.audioTracks;
var audiostream = audiotracks[0];
stream.addTrack(audiostream);
To get an audio stream from a video element in a cross-browser way, use the AudioContext API: createMediaStreamDestination() + createMediaElementSource().
// if all you need is the audio, then you should even probably load your video in an Audio element
var vid = document.createElement('video');
vid.onloadedmetadata = generateAudioStream;
vid.crossOrigin = 'anonymous';
vid.src = 'https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4';
function generateAudioStream() {
var audioCtx = new AudioContext();
// create a stream from our AudioContext
var dest = audioCtx.createMediaStreamDestination();
// connect our video element's output to the stream
var sourceNode = audioCtx.createMediaElementSource(this);
sourceNode.connect(dest)
// start the video
this.play();
// your audio stream
doSomethingWith(dest.stream)
}
function doSomethingWith(audioStream) {
// the audio element that will be shown in the doc
var output = new Audio();
output.srcObject = audioStream;
output.controls = true;
output.play();
document.body.appendChild(output);
}
To add audio to a canvas stream, see: MediaStream Capture Canvas and Audio Simultaneously
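Tying this back to the question, the audio track from dest.stream can then be added to the canvas stream (a minimal sketch, assuming the myCanvas variable from the question):
// combine the canvas video track with the audio track from the
// MediaStreamAudioDestinationNode
var stream = new MediaStream();
stream.addTrack(myCanvas.captureStream().getVideoTracks()[0]);
stream.addTrack(dest.stream.getAudioTracks()[0]);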

Web Audio API MediaElementSource node and SoundCloud not working with effects?

I am using a SoundCloud URL as audio.src. It only plays the unprocessed version when I run it through the delay chain I have.
Here is the fiddle:
http://jsfiddle.net/ehsanziya/nwaH3/
var context = new webkitAudioContext();
var audio = new Audio(); //creates a HTML5 Audio Element
url = 'http://api.soundcloud.com/tracks/33925813/stream' + '?client_id=c625af85886c1a833e8fe3d740af753c';
//wraps the soundcloud stream to an audio element.
audio.src = url;
var source = context.createMediaElementSource(audio);
var input = context.createGainNode();
var output = context.createGainNode();
var fb = context.createGainNode();
fb.gain.value = 0.4;
var delay = context.createDelayNode();
delay.delayTime.value = 0.5;
//dry
source.connect(input);
input.connect(output);
//wet
input.connect(delay);
delay.connect(fb);
fb.connect(delay);
delay.connect(output);
source.mediaElement.play();
The chain works with an OscillatorNode.
What is the reason for this?
And is there any other way to process streaming audio from SoundCloud with the Web Audio API?
You need to wait for the canplaythrough event on your audio element to fire before you can use it with createMediaElementSource.
So just add the event listener, and wait until the callback fires before you assign source = context.createMediaElementSource(audio); and make all of your connections.
Here's an updated jsFiddle that'll do what you want: http://jsfiddle.net/nwaH3/3/
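A minimal sketch of that pattern (mirroring the fiddle; the rest of the node graph is built inside the listener):
audio.addEventListener('canplaythrough', function () {
  var source = context.createMediaElementSource(audio);
  // build the dry/wet delay chain here, e.g.:
  source.connect(input);
  source.mediaElement.play();
}, false);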
