Normalizing audio in JavaScript and saving the result to a file

I've been playing around with a JS normalizer to bring recordings with widely different volumes to play at a fairly constant volume.
This code does its job quite nicely.
My final goal, though, is to allow the user to record audio with the mic, then save the normalized audio to a file (I don't need to play it back, and I don't need the original recording once I have the normalized version of it).
Does anyone know how this can be accomplished?
BTW, since I first used this sample, Chrome now refuses to let the AudioContext object be created without user interaction, so a quick workaround is to move the declaration:
var audioCtx = new AudioContext();
to the beginning of the normalizedAudioElement function.
Cheers!
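One possible approach (a sketch, not from the original thread): get the normalized audio as raw Float32 samples, e.g. by rendering through an OfflineAudioContext, then wrap them in a WAV header and save via a Blob. The helper names below (`normalizeSamples`, `encodeWav`) are my own, not from any library.

```javascript
// Scale samples so the loudest one hits a target peak (0.9 here,
// leaving a little headroom).
function normalizeSamples(samples, targetPeak) {
  targetPeak = targetPeak || 0.9;
  var peak = 0;
  for (var i = 0; i < samples.length; i++) {
    var v = Math.abs(samples[i]);
    if (v > peak) peak = v;
  }
  if (peak === 0) return new Float32Array(samples); // silence: nothing to scale
  var gain = targetPeak / peak;
  var out = new Float32Array(samples.length);
  for (var j = 0; j < samples.length; j++) out[j] = samples[j] * gain;
  return out;
}

// Minimal mono 16-bit PCM WAV encoder: 44-byte RIFF header + samples.
function encodeWav(samples, sampleRate) {
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);
  function writeString(offset, str) {
    for (var k = 0; k < str.length; k++) view.setUint8(offset + k, str.charCodeAt(k));
  }
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + samples.length * 2, true);
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // PCM format
  view.setUint16(22, 1, true);              // mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate (mono, 16-bit)
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeString(36, 'data');
  view.setUint32(40, samples.length * 2, true);
  for (var m = 0; m < samples.length; m++) {
    var s = Math.max(-1, Math.min(1, samples[m]));
    view.setInt16(44 + m * 2, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
  }
  return buffer;
}

// In the browser you would then do something like:
// var blob = new Blob([encodeWav(normalizeSamples(samples), 44100)],
//                     { type: 'audio/wav' });
// and hand URL.createObjectURL(blob) to a download link.
```

The recording half (getUserMedia plus capturing samples) is assumed here; only the normalize-and-save half is shown.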

Related

Analyzing WAV file faster than playback speed

I need to implement a spectrogram of WAV files for a tool we're building. It needs to display the spectrogram for the entire file in one go (think Audacity).
Here's a jsfiddle of my starting point. This function only logs in 'playback time'. Is it possible to get the frequency data any other way?
var freqData = new Uint8Array(analyser.frequencyBinCount);
scp.onaudioprocess = function () {
    analyser.getByteFrequencyData(freqData);
    console.log(freqData);
};
Use an OfflineAudioContext as the destination instead of an AudioContext. You probably also want to check that the buffer size for the ScriptProcessorNode makes sense for the FFT size of the AnalyserNode.
I'm not sure every browser allows the ScriptProcessorNode to work nicely with an OfflineAudioContext.
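A minimal sketch of that suggestion, assuming `audioData` is an ArrayBuffer holding the WAV file (e.g. from an XHR); `analyzeOffline` and `frameCount` are my own names, not Web Audio API functions:

```javascript
// Expected number of spectrogram columns for a given file length.
function frameCount(totalSamples, bufferSize) {
  return Math.ceil(totalSamples / bufferSize);
}

// Render a decoded AudioBuffer through an OfflineAudioContext so the
// frequency data arrives as fast as it can be computed, not at
// playback speed.
function analyzeOffline(audioData, done) {
  var ctx = new AudioContext();
  ctx.decodeAudioData(audioData, function (buffer) {
    var offline = new OfflineAudioContext(
        buffer.numberOfChannels, buffer.length, buffer.sampleRate);
    var source = offline.createBufferSource();
    source.buffer = buffer;

    var analyser = offline.createAnalyser();
    analyser.fftSize = 2048;

    // ScriptProcessor buffer size matching the FFT size, as suggested.
    var scp = offline.createScriptProcessor(2048, 1, 1);
    var frames = [];
    scp.onaudioprocess = function () {
      var freqData = new Uint8Array(analyser.frequencyBinCount);
      analyser.getByteFrequencyData(freqData);
      frames.push(freqData); // one spectrogram column per callback
    };

    source.connect(analyser);
    analyser.connect(scp);
    scp.connect(offline.destination);
    source.start(0);
    offline.startRendering().then(function () { done(frames); });
  });
}
```

Whether onaudioprocess fires reliably during offline rendering varies by browser, as noted above, so treat this as a starting point.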

What is the need of MediaStream interface in WebRTC?

In WebRTC we have MediaStream and MediaStreamTrack interfaces.
MediaStreamTrack represents an audio or video stream from a media source. So a consumer like a video or audio tag can simply take a MediaStreamTrack object and get the stream from it. So what is the need for the MediaStream interface?
According to the official documentation, MediaStream synchronises one or more tracks. Does that mean it combines multiple streams from tracks and produces a single stream, so that we have video data with audio?
For example: does a video tag read the stream from the MediaStream object, or does it read streams from the individual tracks?
This concept is not explained clearly anywhere.
Thanks in advance.
MediaStream has devolved into a simple container for tracks, representing video and audio together (a quite common occurrence).
It doesn't "combine" anything, it's merely a convenience keeping pieces together that need to be played back in time synchronization with each other. No-one likes lips getting out of sync with spoken words.
It's not even really technically needed, but it's a useful semantic in APIs for:
Getting output from hardware with camera and microphone (usually video and audio), and
Connecting it (the output) to a sink, like the HTML video tag (which accepts video and audio).
Reconstituting audio and video at the far end of an RTCPeerConnection that belong together, in the sense that they should generally be played in sync with each other (browsers have more information about expectations on the far end this way, e.g. if packets from one track are lost but not the other).
Whether this is a useful abstraction may depend on the level of detail you're interested in. For instance the RTCPeerConnection API, which is still in Working Draft stage, has over the last year moved away from streams as inputs and outputs to dealing directly with tracks, because the working group believes that details matter more when it comes to transmission (things like tracking bandwidth use etc.)
In any case, going from one to the other will be trivial:
var tracks = stream.getTracks();
console.log(tracks.map(track => track.kind)); // audio,video
video.srcObject = new MediaStream(tracks);
once browsers implement the MediaStream constructor (slated for Firefox 44).

Web Audio- Chaining buffers that are being dynamically written

This is sort of expanding on my previous question Web Audio API- onended event scope, but I felt it was a separate enough issue to warrant a new thread.
I'm basically trying to do double buffering using the web audio API in order to get audio to play with low latency. The basic idea is we have 2 buffers. Each is written to while the other one plays, and they keep playing back and forth to form continuous audio.
The solution in the previous thread works well enough as long as the buffer size is large enough, but latency takes a bit of a hit, as the smallest buffer I ended up being able to use was about 4000 samples long, which at my chosen sample rate of 44.1k would be about 90ms of latency.
I understand that from the previous answer that the issue is in the use of the onended event, and it has been suggested that a ScriptProcessorNode might be of better use. However, it's my understanding that a ScriptProcessorNode has its own buffer of a certain size that is built-in which you access whenever the node receives audio and which you determine in the constructor:
var scriptNode = context.createScriptProcessor(4096, 2, 2); // buffer size, channels in, channels out
I had been using two alternating source buffers initially. Is there a way to access those from a ScriptProcessorNode, or do I need to change my approach?
No, there's no way to use other buffers in a ScriptProcessorNode. Today, your best approach would be to use a ScriptProcessorNode and write the samples directly into it.
Note that the way AudioBuffers work, you're not guaranteed in your previous approach to not be copying and creating new buffers anyway - you can't simultaneously be accessing a buffer from the audio thread and the main thread.
In the future, using an audio worker will be a bit better - it will avoid some of the thread-hopping - but if you're (e.g.) streaming buffers down from a network source, you won't be able to avoid copying. (It's not that expensive, actually.) If you're generating the audio buffer, you should generate it in the onaudioprocess.
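A sketch of writing samples directly in onaudioprocess, using a plain circular queue that the producer fills and the audio callback drains. `SampleQueue` is my own helper, not part of the Web Audio API:

```javascript
// Fixed-capacity circular queue of Float32 samples.
function SampleQueue(capacity) {
  this.buf = new Float32Array(capacity);
  this.readPos = 0;
  this.writePos = 0;
  this.length = 0;
}
// Producer side: append samples (extra samples are dropped when full).
SampleQueue.prototype.push = function (samples) {
  for (var i = 0; i < samples.length && this.length < this.buf.length; i++) {
    this.buf[this.writePos] = samples[i];
    this.writePos = (this.writePos + 1) % this.buf.length;
    this.length++;
  }
};
// Consumer side: fill `out`, emitting silence on underrun.
SampleQueue.prototype.pull = function (out) {
  for (var i = 0; i < out.length; i++) {
    if (this.length > 0) {
      out[i] = this.buf[this.readPos];
      this.readPos = (this.readPos + 1) % this.buf.length;
      this.length--;
    } else {
      out[i] = 0; // underrun: silence rather than stale data
    }
  }
};

// Browser wiring (assumes an AudioContext named ctx):
// var queue = new SampleQueue(16384);
// var node = ctx.createScriptProcessor(1024, 0, 1); // smaller = lower latency
// node.onaudioprocess = function (e) {
//   queue.pull(e.outputBuffer.getChannelData(0));
// };
// node.connect(ctx.destination);
// Producer: queue.push(newlyGeneratedSamples) whenever data is ready.
```

This replaces the two alternating source buffers: there is just one node, and latency is set by the ScriptProcessorNode buffer size (1024 samples at 44.1k is about 23ms).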

Firefox Web Audio API on-the-fly update AudioBuffer / AudioBufferSourceNode

I'm creating a 1s audio snippet by programmatically filling a AudioBuffer. The AudioBufferSourceNode has looping enabled. It plays back just fine in Chrome and Firefox.
Now I want to dynamically update the AudioBuffer and have the new audio picked up immediately (or at the next loop). In Chrome this works perfectly by simply getting the channel data (getChannelData(0)) and writing to it. Chrome updates the playing audio on-the-fly. Firefox keeps playing the original buffer over and over again. In fact in Firefox the AudioBuffer needs to be written before assigning it to the AudioBufferSourceNode (source.buffer = buffer).
This should not be done this way. You're trying to update an object across a thread boundary.
Chrome has a bug where we don't currently implement the memory protection (i.e. you can update the contents of the AudioBuffer and it will change what the looping buffer sounds like). FF currently has a different bug, where it allows you to set the .buffer more than once. These should both get fixed.
To address this scenario, you need to loop each buffer until you get the next one, then cross-fade between them. It's unlikely that just looping a 1s buffer is really what you want anyway (unless it's noise).
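A sketch of that cross-fade, assuming an AudioContext named `ctx`; `fadeGains` and the wiring below are my own names, not from the spec:

```javascript
// Equal-power gain pair for a fade position t in [0, 1]:
// the outgoing buffer's gain falls as the incoming one's rises,
// keeping perceived loudness roughly constant.
function fadeGains(t) {
  return {
    fadeOut: Math.cos(t * 0.5 * Math.PI),       // old buffer fades out
    fadeIn: Math.cos((1 - t) * 0.5 * Math.PI)   // new buffer fades in
  };
}

// Browser wiring (linear ramps for brevity; for an equal-power fade,
// schedule a curve built from fadeGains with setValueCurveAtTime):
// function crossfadeTo(oldGainNode, newBuf, fadeTime) {
//   var src = ctx.createBufferSource();
//   var gain = ctx.createGain();
//   src.buffer = newBuf;
//   src.loop = true;
//   src.connect(gain);
//   gain.connect(ctx.destination);
//   var now = ctx.currentTime;
//   gain.gain.setValueAtTime(0, now);
//   gain.gain.linearRampToValueAtTime(1, now + fadeTime);
//   oldGainNode.gain.setValueAtTime(1, now);
//   oldGainNode.gain.linearRampToValueAtTime(0, now + fadeTime);
//   src.start(now);
//   return gain; // becomes the "old" gain node for the next fade
// }
```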
Simply assigning the same buffer again will update Firefox's internal state. So just do source.buffer = buffer again after updating the buffer. Even though it should be a NOOP, since it's the exact same reference.
Even source.buffer = source.buffer does the trick.

Streaming programmatically generated audio to browser

I'm trying write a web application that takes information from the user, generates audio on the server from that information, and then plays it back in the user's browser. I've been googling a whole bunch, and I'm kind of unsure exactly what it is that I need to do to get this to happen. What is it that programs like Icecast are doing "behind the scenes" to create these streams? I feel a little bit like I don't even know how to ask the right question or search as almost all the information I'm finding is either about serving files or assumes I know more than I do about how the server side of things works.
This question may help with how to generate music programmatically; it suggests several tools that are designed for this purpose: What's a good API for creating music via programming?
Icecast is a bit of a red herring - that is the tool to broadcast an audio stream, so really you'd be looking at feeding the output of whatever tool you use to generate your music into Icecast or something similar in order to broadcast it to the world at large. However, this is more for situations where you want a single stream to be broadcast to multiple users (e.g. internet radio). If you simply want to generate audio from user input and serve it back to that user, then this isn't necessary.
I'm aware this isn't a full answer, as the question is not fully formed, but I couldn't fit it all into a comment. Hopefully it should help others who stumble upon this question... I suspect the original question writer has moved on by now.
Have a look at the Media Source API (still under implementation at the time of writing); this may be what you need.
window.MediaSource = window.MediaSource || window.WebKitMediaSource;

var ms = new MediaSource();
var audio = document.querySelector('audio');
audio.src = window.URL.createObjectURL(ms);

ms.addEventListener('webkitsourceopen', function (e) {
    ...
    var sourceBuffer = ms.addSourceBuffer('type; codecs="codecs"');
    sourceBuffer.append(oneAudioChunk); // append chunks of data
    ...
}, false);
