Clicking sounds in stream played with Web Audio API - javascript

I have a strange problem. I'm using Web Audio to play a stream from the server. I do it the following way:
var d2 = new DataView(evt.data);
var data = new Float32Array(d2.byteLength / Float32Array.BYTES_PER_ELEMENT);
for (var jj = 0; jj < data.length; ++jj) {
    data[jj] = d2.getFloat32(jj * Float32Array.BYTES_PER_ELEMENT, true);
}
var buffer = context.createBuffer(1, data.length, 44100);
buffer.getChannelData(0).set(data);
source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.start(startTime);
startTime += buffer.duration;
This works fine.
If I play the stream on my computer, I don't have any problems.
If I play the same stream on my Windows 8 tablet (same Chrome version), I hear a lot of clicking sounds in the audio. There are multiple of them within one second.
It seems that I hear a click at the end of each buffer.
I don't understand the difference. The only difference I could find is that the sampling rate of the sound card on my computer is 44100 Hz, while on the tablet it's 48000 Hz.
The transmitted stream is at 44100 Hz and I don't have any sample rate problems, just the clicking sounds.
Does anybody have an idea why this is happening?
Thank you,
metabolic

AudioBufferSourceNodes resample their buffers to the AudioContext sample rate. As you can imagine, the API does not allow you to keep the resampler state from one AudioBufferSourceNode to the next, so there is a discontinuity between two consecutive buffers.
I think the easiest way is to provide a stream at the sample rate of the device, by resampling server-side. Once AudioWorkerNode is ready and implemented, you'll be able to fix this yourself client-side as well, but it isn't available yet.
Alternatively, you can just stream using an <audio> element and pipe that into the Web Audio API using AudioContext.createMediaElementSource().
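For illustration, a minimal sketch of that last approach might look like this (the stream URL is a placeholder):
// Placeholder stream URL; any source an <audio> element can play works.
var audio = new Audio('https://example.com/stream');
audio.crossOrigin = 'anonymous'; // required if the stream is served cross-origin
var context = new (window.AudioContext || window.webkitAudioContext)();
var elementSource = context.createMediaElementSource(audio);
elementSource.connect(context.destination);
audio.play();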

I had the same issue. Thanks to Padenot's answer, I checked the sample rates: AudioContext.sampleRate defaulted to 44100, but the PCM data and AudioBuffer were at 48000. Initialising the AudioContext with a matching sampleRate solved the problem:
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext({
    latencyHint: 'interactive',
    sampleRate: 48000,
});
With this, I can schedule the playback of 20 ms, 48 kHz PCM16 AudioBuffers back-to-back without any clicks or distortion.

Related

JavaScript new AudioContext mono to stereo not working

I need to convert the mp3 files I play in a project from mono to stereo with the Web Audio API. But I can't do that with a1 = new Audio('/1.mp3');, and my entire system is built around that construct. Is there a way to convert a sound created with new Audio('/1.mp3'), or all the sounds playing on the page, to stereo?
var a1 = new Audio(`/1.mp3`);
a1.volume = .5;
a1.play()
I am using a simple code structure as above.
https://stackoverflow.com/a/54990085/15929287
I couldn't adapt the above answer for myself. In no way can I convert the sound I created with new Audio() to stereo. In the example in the link, an oscillator is used as the source. I'm just trying to do something where I can adjust the mono/stereo setting. I need your help.
The linked answer assumes the mono audio source is an oscillator created with the Web Audio API, but it is also possible to use an audio element as the source by using a MediaElementAudioSourceNode.
The audio graph would then need to be wired up like this:
const audio = new Audio('/1.mp3');
const audioCtx = new AudioContext();
// Use the audio element as the source node of the graph.
const source = audioCtx.createMediaElementSource(audio);
// One gain node per output channel, so left/right can be adjusted independently.
const gainNodeL = audioCtx.createGain();
const gainNodeR = audioCtx.createGain();
const merger = audioCtx.createChannelMerger(2);
source.connect(gainNodeL);
source.connect(gainNodeR);
gainNodeL.connect(merger, 0, 0); // left channel
gainNodeR.connect(merger, 0, 1); // right channel
merger.connect(audioCtx.destination);
Please note that it's probably still necessary to call resume() on the AudioContext in response to a user gesture to make sure the AudioContext is running.
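A minimal sketch of that, assuming a button with the id 'play' exists on the page (the id is hypothetical):
// Hypothetical 'play' button; any user gesture will do.
document.getElementById('play').addEventListener('click', function () {
    audioCtx.resume().then(function () {
        audio.play();
    });
});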

JavaScript MediaSource and MediaRecorder lag in playing live-stream video

I have been working on streaming live video using WebRTC (RTCPeerConnection) with a library called simple-peer, but I am seeing a lag between the live stream recorded with MediaRecorder and the video that is played back using MediaSource.
Here is recorder:
var mediaRecorder = new MediaRecorder(stream, options);
mediaRecorder.ondataavailable = handleDataAvailable;

function handleDataAvailable(event) {
    if (connected && event.data.size > 0) {
        peer.send(event.data);
    }
}
...
peer.on('connect', () => {
    // wait for 'connect' event before using the data channel
    mediaRecorder.start(1);
});
Here is source that is played:
var mediaSource = new MediaSource();
var sourceBuffer;
mediaSource.addEventListener('sourceopen', args => {
    sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
});
...
peer.on('data', data => {
    // got a data channel message
    sourceBuffer.appendBuffer(data);
});
I open two tabs, connect to myself, and I see a delay in the played video.
It seems I have configured MediaRecorder or MediaSource badly.
Any help will be appreciated ;)
You've combined two completely unrelated techniques for streaming the video, and are getting the worst tradeoffs of both. :-)
WebRTC has media stream handling built into it. If you expect realtime video, the WebRTC stack is what you want to use. It handles codec negotiation, auto-scales bandwidth, frame size, frame rate, and encoding parameters to match network conditions, and will outright drop chunks of time to keep playback as realtime as possible.
On the other hand, if retaining quality is more desirable than being realtime, MediaRecorder is what you would use. It makes no adjustments based on network conditions because it is unaware of those conditions. MediaRecorder doesn't know or care where you put the data after it gives you the buffers.
If you try to play back video as it's being recorded, it will inevitably lag further and further behind, because there is no built-in catch-up method. The only thing that can happen is a buffer underrun, where the playback side waits until there is enough data to begin playback again. Even if it falls minutes behind, it isn't going to automatically skip ahead.
The solution is to use the right tool. It sounds like from your question that you want realtime video. Therefore, you need to use WebRTC. Fortunately simple-peer makes this... simple.
On the recording side:
const peer = new Peer({
    initiator: true,
    stream
});
Then on the playback side:
peer.on('stream', (stream) => {
    videoEl.srcObject = stream;
});
Much simpler. The WebRTC stack handles everything for you.
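For completeness, a sketch of how the stream on the recording side might be obtained; the getUserMedia constraints here are assumptions, adjust them to your app:
// Assumed capture setup; the question obtains stream elsewhere.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then((stream) => {
        const peer = new Peer({
            initiator: true,
            stream
        });
        // exchange signaling data via peer.on('signal', ...) as usual
    });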

Adding panner / spatial audio to Web Audio Context from a WebRTC stream not working

I would like to create a Web Audio panner to position the sound from a WebRTC stream.
I have the stream connecting OK and can hear the audio and see the video, but the panner does not have any effect on the audio (changing panner.setPosition(10000, 0, 0) to + or - 10000 makes no difference to the sound).
This is the onaddstream function where the audio and video get piped into a video element, and where I presume I need to add the panner.
There are no errors, it just isn't panning at all.
What am I doing wrong?
Thanks!
peer_connection.onaddstream = function(event) {
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioCtx = new AudioContext();
    audioCtx.listener.setOrientation(0, 0, -1, 0, 1, 0);
    var panner = audioCtx.createPanner();
    panner.panningModel = 'HRTF';
    panner.distanceModel = 'inverse';
    panner.refDistance = 1;
    panner.maxDistance = 10000;
    panner.rolloffFactor = 1;
    panner.coneInnerAngle = 360;
    panner.coneOuterAngle = 0;
    panner.coneOuterGain = 0;
    panner.setPosition(10000, 0, 0); // this doesn't do anything
    peerInput.connect(panner);
    panner.connect(audioCtx.destination);

    // attach the stream to the document element
    var remote_media = USE_VIDEO ? $("<video>") : $("<audio>");
    remote_media.attr("autoplay", "autoplay");
    if (MUTE_AUDIO_BY_DEFAULT) {
        remote_media.attr("muted", "false");
    }
    remote_media.attr("controls", "");
    peer_media_elements[peer_id] = remote_media;
    $('body').append(remote_media);
    attachMediaStream(remote_media[0], event.stream);
}
Try to get the event stream before setting the panner
var source = audioCtx.createMediaStreamSource(event.stream);
Reference: Mozilla Developer Network - AudioContext
createPanner reference: Mozilla Developer Network - createPanner
3rd Party Library: wavesurfer.js
Remove all the options you've set for the panner node and see if that helps. (The cone angles seem a little funny to me, but I always forget how they work.)
If that doesn't work, create a smaller test with the panner but use a simple oscillator as the input. Play around with the parameters and positions to make sure it does what you want.
Put this back into your app. Things should work then.
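A minimal sketch of such a test, which should play a two-second tone positioned to the right (the position values are arbitrary):
// Oscillator-through-panner smoke test.
var testCtx = new (window.AudioContext || window.webkitAudioContext)();
var osc = testCtx.createOscillator();
var testPanner = testCtx.createPanner();
testPanner.panningModel = 'HRTF';
testPanner.setPosition(10, 0, 0); // positive x should sound right of center
osc.connect(testPanner);
testPanner.connect(testCtx.destination);
osc.start();
osc.stop(testCtx.currentTime + 2);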
Figured this out for myself.
The problem was not the code; it was because I was connected via Bluetooth audio.
Bluetooth apparently can only do stereo audio with the microphone turned off. As soon as you activate the mic, it steals one of the channels and the audio output downgrades to mono.
With mono audio you definitely cannot do 3D positioned sound, hence my thinking the code was not working.

How to render the audio from a synth to a buffer (array of PCM values) with the Web Audio API

I have a simple synth that plays a note for some length of time:
// Creating audio graph
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = audioCtx.createOscillator();
var gainNode = audioCtx.createGain();
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination);
// Setting parameters
oscillator.type = "sine";
oscillator.frequency.value = 2500;
// Run audio graph
var currentTime = offlineCtx.currentTime;
oscillator.start(currentTime);
oscillator.stop(currentTime + 1);
How can I get the PCM data of the sound the synthesiser makes? I've managed to do this with audio samples by using decodeAudioData, but I can't find an equivalent for an audio graph that isn't based on loading a sample.
I specifically want to render the audio graph with the OfflineAudioContext since I only care about retrieving the PCM data as fast as possible.
Thanks!
You say you want to use an offline context and then you don't actually use an offline context. So you should do
var offlineCtx = new OfflineAudioContext(nc, length, rate)
where nc = number of channels, length is the number of samples, and rate is the sample rate you want to use.
Create your graph, start everything and then do
offlineCtx.startRendering().then(function (buffer) {
    // buffer has the PCM data you want. Save it somewhere,
    // or whatever
});
(I'm not sure all browsers support promises from an offline context. If not, use offlineCtx.oncomplete to get the data. See the spec.)
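Putting that together with the synth from the question, a sketch might look like this (one channel and a one-second render at 44100 Hz are assumptions):
// Render one second of the 2500 Hz sine tone offline.
var offlineCtx = new OfflineAudioContext(1, 44100, 44100);
var oscillator = offlineCtx.createOscillator();
var gainNode = offlineCtx.createGain();
oscillator.connect(gainNode);
gainNode.connect(offlineCtx.destination);
oscillator.type = 'sine';
oscillator.frequency.value = 2500;
oscillator.start(0);
oscillator.stop(1);
offlineCtx.startRendering().then(function (renderedBuffer) {
    var pcm = renderedBuffer.getChannelData(0); // Float32Array of raw samples
    console.log(pcm.length); // 44100
});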
Eventually I found an answer here: http://www.pp4s.co.uk/main/tu-sms-audio-recording.html#co-tu-sms-audio-recording__js but you will not like it. Apparently, the Web Audio API is not yet standardized enough for this to work in all browsers. I have been able to run the code above in Firefox, but not in Chrome.
Basic ideas:
- use dest = ac.createMediaStreamDestination(); to get a destination for the sound
- use new MediaRecorder(dest.stream); to get a recorder
- use the MediaRecorder ondataavailable and stop events to get the data and combine it into a Blob
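A sketch of that recording approach (the one-second duration and the blob handling are assumptions):
// Record the output of an oscillator via a MediaStream destination.
var ac = new AudioContext();
var osc = ac.createOscillator();
var dest = ac.createMediaStreamDestination();
osc.connect(dest);

var mediaRecorder = new MediaRecorder(dest.stream);
var chunks = [];
mediaRecorder.ondataavailable = function (e) { chunks.push(e.data); };
mediaRecorder.onstop = function () {
    var blob = new Blob(chunks, { type: mediaRecorder.mimeType });
    // decode or upload the blob as needed
};
mediaRecorder.start();
osc.start();
setTimeout(function () { osc.stop(); mediaRecorder.stop(); }, 1000);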

Change sample rate of AudioContext (getUserMedia)

I'm trying to record at 48000 Hz via getUserMedia, but without luck. The returned audio MediaStream is at 44100 Hz. How can I set this to 48000 Hz?
Here are snippets of my code:
var startUsermedia = this.startUsermedia;
navigator.getUserMedia({
    audio: true,
    //sampleRate: 48000
}, startUsermedia, function (e) {
    console.log('No live audio input: ' + e);
});
The startUsermedia function:
startUsermedia: function (stream) {
    var input = audio_context.createMediaStreamSource(stream);
    console.log('Media stream created.');
    // Uncomment if you want the audio to feedback directly
    //input.connect(audio_context.destination);
    //__log('Input connected to audio context destination.');
    recorder = new Recorder(input);
    console.log('Recorder initialised.');
},
I tried changing the sampleRate property of the AudioContext, but no luck.
How can I change the sampleRate to 48000 Hz?
EDIT: We are also now okay with a Flash solution that can record and export WAV files at 48000 Hz.
As far as I know, there is no way to change the sample rate within an audio context. The sample rate will usually be the sample rate of your recording device and will stay that way. So you will not be able to write something like this:
var input = audio_context.createMediaStreamSource(stream);
var resampler = new Resampler(44100, 48000);
input.connect(resampler);
resampler.connect(audio_context.destination);
However, if you want to take your audio stream, resample it, and then send it to the backend (or do something else with it outside of the Web Audio API), you can use an external sample rate converter (e.g. https://github.com/taisel/XAudioJS/blob/master/resampler.js).
var resampler = new Resampler(44100, 48000, 1, 2229);

function startUsermedia(stream) {
    var input = audio_context.createMediaStreamSource(stream);
    console.log('Media stream created.');
    recorder = audio_context.createScriptProcessor(2048);
    recorder.onaudioprocess = recorderProcess;
    input.connect(recorder); // the source must be wired into the processor
    recorder.connect(audio_context.destination);
}

function recorderProcess(e) {
    var buffer = e.inputBuffer.getChannelData(0);
    var resampled = resampler.resampler(buffer);
    //--> do something with the resampled data, for instance send it to the server
}
It looks like there is an open bug about the inability to set the sampling rate:
https://github.com/WebAudio/web-audio-api/issues/300
There's also a Chrome issue:
https://bugs.chromium.org/p/chromium/issues/detail?id=432248
I checked the latest Chromium code and there is nothing in there that lets you set the sampling rate.
Edit: Seems like it has been implemented in Chrome, but is broken currently - see the comments in the Chromium issue.
It's been added to Chrome:
var ctx = new (window.AudioContext || window.webkitAudioContext)({ sampleRate: 16000 });
https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/AudioContext
audioContext = new AudioContext({sampleRate: 48000})
Simply set the sample rate when creating the AudioContext object. This worked for me.
NOTE: This answer is outdated.
You can't. The sample rate of the AudioContext is set by the browser/device and there is nothing you can do to change it. In fact, you will find that 44.1 kHz on your machine might be 48 kHz on mine. It varies depending on what the OS picks by default.
Also remember that not all hardware is capable of all sample rates.
You can use an OfflineAudioContext to essentially render your audio buffer to a different sample rate (but this is a batch operation).
So you would record your recording using the normal audio context, and then use an OfflineAudioContext with a different sample rate to render your buffer. There is an example on the Mozilla page.
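A sketch of that resampling step, assuming recordedBuffer is an AudioBuffer captured at the context's native rate and 48000 Hz is the desired target:
// Re-render a recorded AudioBuffer at 48000 Hz.
function resampleTo48k(recordedBuffer) {
    var offlineCtx = new OfflineAudioContext(
        recordedBuffer.numberOfChannels,
        Math.ceil(recordedBuffer.duration * 48000),
        48000
    );
    var source = offlineCtx.createBufferSource();
    source.buffer = recordedBuffer;
    source.connect(offlineCtx.destination);
    source.start(0);
    return offlineCtx.startRendering(); // resolves with the 48 kHz AudioBuffer
}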
It is now in the spec but not yet implemented in Chromium.
Also in bugs.chromium.org, "Status: Available" does not mean it is implemented. It just means that nobody is working on it and that it is available for anyone who wants to work on it. So "Available" means "Not assigned".
