I need to convert the mp3 files I play in a project from mono to stereo with the Web Audio API, but I can't do this with a1 = new Audio('/1.mp3'). My entire system is built on this pattern. Is there a way to convert all the sounds playing on the page to stereo, or to convert a sound created with new Audio('/1.mp3')?
var a1 = new Audio('/1.mp3');
a1.volume = 0.5;
a1.play();
I am using a simple code structure as above.
https://stackoverflow.com/a/54990085/15929287
I couldn't adapt the above answer to my case. I can't find any way to convert the sound I created with new Audio() to stereo. The example in the link also uses an oscillator as the source. I'm just trying to build something where I can adjust the mono/stereo setting. I need your help.
The linked answer assumes the mono audio source is an oscillator created with the Web Audio API, but it is also possible to use an audio element as the source by wrapping it in a MediaElementAudioSourceNode.
The audio graph would then need to be wired up like this:
const audio = new Audio('/1.mp3');
const audioCtx = new AudioContext();
// route the media element through the Web Audio graph
const source = audioCtx.createMediaElementSource(audio);
// one gain node per output channel
const gainNodeL = audioCtx.createGain();
const gainNodeR = audioCtx.createGain();
// merge the two mono signals into one stereo signal
const merger = audioCtx.createChannelMerger(2);
source.connect(gainNodeL);
source.connect(gainNodeR);
gainNodeL.connect(merger, 0, 0); // output 0 -> left channel
gainNodeR.connect(merger, 0, 1); // output 0 -> right channel
merger.connect(audioCtx.destination);
Please note that it's probably still necessary to call resume() on the AudioContext in response to a user gesture to make sure the AudioContext is running.
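For example, a minimal sketch of how that could look, assuming a play button exists on the page (the button element is an assumption, not part of the original snippet):
// hypothetical play button; any user gesture will do
const playButton = document.querySelector('button');
playButton.addEventListener('click', () => {
  // an AudioContext often starts out suspended until a user gesture
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }
  // the media element itself still needs to be started
  audio.play();
});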
Related
I'm developing a game using JavaScript and other web technologies. It has a game mode that is basically a tower defense, in which multiple objects may need to play the same audio file (.ogg) at the same time. Loading the file and creating a new audio object for each of them lags the game too much, even if I try to stream it instead of doing a simple synchronous read. And if I create one audio object, save it in a variable and reuse it, then whenever it is playing and a new request to play arrives, the sound that was playing stops so the new one can start (so, with enough of those, nothing plays at all).
Given those issues, I decided to make a copy of the audio object each time it was going to be played, but that is not only slow to do, it also creates a minor memory leak (at least the way I did it).
How can I properly cache an audio object for re-use? Consider that I'm pretty sure I'll need a new one each time, because each sound has a position, and thus each of them will play differently depending on the player's position relative to the object that is playing the sound.
You tagged your question with web-audio-api, but from the body of this question it seems you are using an HTMLMediaElement (<audio>) instead of the Web Audio API.
So I'll invite you to make the transition to the Web Audio API.
There you'll be able to decode your audio file once, keep the decoded data in memory as a single AudioBuffer, and create many source nodes that all read from that one AudioBuffer, without using any more memory.
const btn = document.querySelector("button");
const context = new AudioContext();
// a GainNode to control the output volume of our audio
const volumeNode = context.createGain();
volumeNode.gain.value = 0.5; // from 0 to 1
volumeNode.connect(context.destination);
fetch("https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3")
  // get the resource as an ArrayBuffer
  .then((resp) => resp.arrayBuffer())
  // decode the audio data from this resource
  .then((buffer) => context.decodeAudioData(buffer))
  // now we have our AudioBuffer object, ready to be played
  .then((audioBuffer) => {
    btn.onclick = (evt) => {
      // allowing an AudioContext to make noise
      // must be requested from a user gesture
      if (context.state === "suspended") {
        context.resume();
      }
      // a very light player object
      const source = context.createBufferSource();
      // a simple pointer to the big AudioBuffer (no copy)
      source.buffer = audioBuffer;
      // connect to our volume node, itself connected to the audio output
      source.connect(volumeNode);
      // start playing now
      source.start(0);
    };
    // now you can spam the button!
    btn.disabled = false;
  })
  .catch(console.error);
<button disabled>play</button>
I am currently working on a simple audio project which uses HTML / JS audio elements to play audio using buttons.
Is it somehow possible to record that audio being played instead of recording the microphone (by using the MediaRecorder API)?
After following the steps provided by lawrence-witt, I figured it out. Here's an example for everyone trying to achieve the same:
const audioContext = new AudioContext();
const destination = audioContext.createMediaStreamDestination();
const mediaRecorder = new MediaRecorder(destination.stream);
// audioArray is an array of 'new Audio()' elements
audioArray.forEach((audio) => {
  const source = audioContext.createMediaStreamSource(audio.captureStream());
  source.connect(destination);
});
mediaRecorder.start();
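To actually obtain the recording, you still need to collect the chunks the recorder emits; a minimal sketch (the mimeType and the five-second cut-off are assumptions for illustration):
const chunks = [];
mediaRecorder.ondataavailable = (evt) => chunks.push(evt.data);
mediaRecorder.onstop = () => {
  // combine the recorded chunks into a single playable Blob
  const blob = new Blob(chunks, { type: "audio/webm" });
  const url = URL.createObjectURL(blob);
  console.log("recording available at", url);
};
// stop recording after five seconds, for example
setTimeout(() => mediaRecorder.stop(), 5000);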
I would like to create a Web Audio panner to position the sound from a WebRTC stream.
I have the stream connecting OK and can hear the audio and see the video, but the panner does not have any effect on the audio (changing panner.setPosition(10000, 0, 0) to + or - 10000 makes no difference to the sound).
This is the onaddstream function where the audio and video get piped into a video element, and where I presume I need to add the panner.
There are no errors, it just isn't panning at all.
What am I doing wrong?
Thanks!
peer_connection.onaddstream = function(event) {
  var AudioContext = window.AudioContext || window.webkitAudioContext;
  var audioCtx = new AudioContext();
  audioCtx.listener.setOrientation(0, 0, -1, 0, 1, 0);
  var panner = audioCtx.createPanner();
  panner.panningModel = 'HRTF';
  panner.distanceModel = 'inverse';
  panner.refDistance = 1;
  panner.maxDistance = 10000;
  panner.rolloffFactor = 1;
  panner.coneInnerAngle = 360;
  panner.coneOuterAngle = 0;
  panner.coneOuterGain = 0;
  panner.setPosition(10000, 0, 0); // this doesn't do anything
  peerInput.connect(panner);
  panner.connect(audioCtx.destination);

  // attach the stream to the document element
  var remote_media = USE_VIDEO ? $("<video>") : $("<audio>");
  remote_media.attr("autoplay", "autoplay");
  if (MUTE_AUDIO_BY_DEFAULT) {
    remote_media.attr("muted", "false");
  }
  remote_media.attr("controls", "");
  peer_media_elements[peer_id] = remote_media;
  $('body').append(remote_media);
  attachMediaStream(remote_media[0], event.stream);
}
Try getting the event stream as a source node before setting up the panner:
var source = audioCtx.createMediaStreamSource(event.stream);
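That source can then be connected through the panner, replacing the peerInput variable that is never defined in the question (a sketch reusing the question's variables):
source.connect(panner);
panner.connect(audioCtx.destination);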
Reference: Mozilla Developer Network - AudioContext
createPanner reference: Mozilla Developer Network - createPanner
3rd Party Library: wavesurfer.js
Remove all the options you've set for the panner node and see if that helps. (The cone angles seem a little funny to me, but I always forget how they work.)
If that doesn't work, create a smaller test with the panner but use a simple oscillator as the input. Play around with the parameters and positions to make sure it does what you want.
Put this back into your app. Things should work then.
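Such a test could look roughly like this (the positions and the two-second delay are arbitrary values for the experiment):
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const panner = ctx.createPanner();
panner.panningModel = 'HRTF';
osc.connect(panner);
panner.connect(ctx.destination);
panner.setPosition(-10, 0, 0); // start hard left
osc.start();
// jump to hard right after two seconds; the change should be clearly audible
setTimeout(() => panner.setPosition(10, 0, 0), 2000);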
Figured this out for myself.
The problem was not the code; it was because I was connected with Bluetooth audio.
Bluetooth apparently can only do stereo audio with the microphone turned off. As soon as you activate the mic, that steals one of the channels and the audio output downgrades to mono.
If you have mono audio, you definitely cannot do 3D positioned sound, which is why I thought the code was not working.
I have a simple synth that plays a note for some length of time:
// Creating audio graph
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = audioCtx.createOscillator();
var gainNode = audioCtx.createGain();
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination);
// Setting parameters
oscillator.type = "sine";
oscillator.frequency.value = 2500;
// Run audio graph
var currentTime = offlineCtx.currentTime;
oscillator.start(currentTime);
oscillator.stop(currentTime + 1);
How can I get the PCM data of the sound the synthesiser makes? I've managed to do this with audio samples by using decodeAudioData, but I can't find an equivalent for an audio graph that isn't based on loading a sample.
I specifically want to render the audio graph with the OfflineAudioContext since I only care about retrieving the PCM data as fast as possible.
Thanks!
You say you want to use an offline context, but then you don't actually use one. So you should do
var offlineCtx = new OfflineAudioContext(nc, length, rate)
where nc = number of channels, length is the number of samples, and rate is the sample rate you want to use.
Create your graph, start everything and then do
offlineCtx.startRendering().then(function (buffer) {
// buffer has the PCM data you want. Save it somewhere,
// or whatever
})
(I'm not sure all browsers support promises from an offline context. If not, use offlineCtx.oncomplete to get the data. See the spec.)
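Putting this together with the synth from the question, a sketch could look like this (one channel and one second at 44100 Hz are assumed values):
const offlineCtx = new OfflineAudioContext(1, 44100, 44100);
const oscillator = offlineCtx.createOscillator();
const gainNode = offlineCtx.createGain();
oscillator.connect(gainNode);
gainNode.connect(offlineCtx.destination);
oscillator.type = "sine";
oscillator.frequency.value = 2500;
oscillator.start(0);
oscillator.stop(1);
offlineCtx.startRendering().then((buffer) => {
  // a Float32Array of raw PCM samples for channel 0
  const pcm = buffer.getChannelData(0);
  console.log(pcm.length, "samples rendered");
});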
Eventually I found an answer here: http://www.pp4s.co.uk/main/tu-sms-audio-recording.html#co-tu-sms-audio-recording__js but you will not like it. Apparently, the Audio API is not yet standardized enough for this to work on all browsers, so I have been able to run the code above in Firefox, but not in Chrome.
Basic ideas (a sketch follows below):
- use dest = ac.createMediaStreamDestination(); to get a destination for the sound
- use new MediaRecorder(dest.stream); to get a recorder
- use the MediaRecorder ondataavailable and stop events to get the data and combine it into a Blob
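A minimal sketch of those three steps (the oscillator source and the one-second duration are illustrative assumptions):
const ac = new AudioContext();
const dest = ac.createMediaStreamDestination();
const recorder = new MediaRecorder(dest.stream);
const chunks = [];
recorder.ondataavailable = (evt) => chunks.push(evt.data);
recorder.onstop = () => {
  // combine everything that was recorded into one Blob
  const blob = new Blob(chunks, { type: "audio/ogg; codecs=opus" });
  // do something with the blob, e.g. URL.createObjectURL(blob)
};
// anything connected to dest gets recorded; an oscillator as a stand-in source
const osc = ac.createOscillator();
osc.connect(dest);
osc.start();
recorder.start();
setTimeout(() => { osc.stop(); recorder.stop(); }, 1000);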
I have a strange problem. I'm using Web Audio to play a stream from the server. I do that the following way:
var d2 = new DataView(evt.data);
var data = new Float32Array(d2.byteLength / Float32Array.BYTES_PER_ELEMENT);
for (var jj = 0; jj < data.length; ++jj) {
  data[jj] = d2.getFloat32(jj * Float32Array.BYTES_PER_ELEMENT, true);
}
var buffer = context.createBuffer(1, data.length, 44100);
buffer.getChannelData(0).set(data);
source = context.createBufferSource();
source.buffer = buffer;
source.start(startTime);
source.connect(context.destination);
startTime += buffer.duration;
This works fine. If I play the stream on my computer, I don't have any problems.
If I play the same stream on my Windows 8 tablet (same Chrome version), I hear a lot of clicking sounds in the audio, multiple of them within one second.
It seems as though I hear a click at the end of each buffer.
I don't understand the difference. The only difference I could find was that the sampling rate of the sound card on my computer is 44100, while on the tablet it's 48000.
The transmitted stream is 44100 and I don't have any sample-rate problems, just the clicking sounds.
Does anybody have an idea why this is happening?
Thank you,
metabolic
AudioBufferSourceNodes resample their buffers to the AudioContext's sample rate. As you can imagine, the API does not allow you to keep the resampler state between one AudioBufferSourceNode and the next, so there is a discontinuity between the two buffers.
I think the easiest way is to provide a stream at the sample rate of the device, by resampling server-side. Once AudioWorkletNode is ready and implemented you'll be able to fix this client-side as well, but it isn't yet.
Alternatively, you can just stream using an audio element and pipe that into the Web Audio API using AudioContext.createMediaElementSource().
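That alternative could look roughly like this (the stream URL is a placeholder):
const audio = new Audio('https://example.com/stream');
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(audio);
// the element handles buffering and resampling; Web Audio just passes it through
source.connect(ctx.destination);
audio.play();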
I had the same issue. Thanks to Padenot's answer I checked the sample rates: AudioContext.sampleRate defaulted to 44100, but the PCM data and AudioBuffer were 48000. Initialising the AudioContext with a matching sampleRate solved the problem:
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext({
  latencyHint: 'interactive',
  sampleRate: 48000,
});
With this, I can schedule the playback of 20 ms 48 kHz PCM16 AudioBuffers back-to-back without any clicks or distortion.