Mix WAV files and export the result with the Web Audio API - javascript

I'm a web developer from Japan, and this is my first question on Stack Overflow.
I'm currently building a simple music web application. I'm a complete beginner at writing music software, so I'm struggling with the implementation.
After some research, I concluded that the Web Audio API was the best choice, so I decided to use it.
▼ What I want to achieve
Load multiple WAV files with the Web Audio API, combine them into a single WAV file, and make that file downloadable from the browser.
For example, load several WAV files (guitar, drums, piano, and so on), edit them in the browser, and finally output everything as one WAV file.
The edited WAV file can then be downloaded from the browser and played in iTunes.
▼ Question
Is it possible to achieve these requirements using just the Web Audio API, or do we need another library?
I looked at Recorder.js on GitHub, but development stopped about 2~3 years ago, it has many open issues, and I can't get support, so I decided not to use it.
I also checked a similar question, Web audio API: scheduling sounds and exporting the mix, but the information there is old and I don't know if it still applies.
Thanks.

Hi and welcome to Stack Overflow!
Is it possible to achieve this just using the Web Audio API?
In terms of merging/mixing the files together, this is perfectly achievable! This article goes through many (if not all) of the steps you will need to carry out the task you described.
Each file you want to upload can be loaded into an AudioBufferSourceNode (examples are explained in the article linked above). Here's an example of setting up a buffer source once the audio data has been loaded:
play: function (data, callback) {
    // Create an audio source node and a gain node, then play the buffer
    var me = this,
        source = this.context.createBufferSource(),
        gainNode = this.context.createGain();

    // Shims for older WebKit implementations
    if (!source.start) { source.start = source.noteOn; }
    if (!source.stop) { source.stop = source.noteOff; }

    source.connect(gainNode);
    gainNode.connect(this.context.destination);

    source.buffer = data;
    source.loop = true;
    source.startTime = this.context.currentTime; // important for later!
    source.start(0);
    return source;
}
There are also nodes designed specifically for mixing purposes, such as the ChannelMergerNode (which combines multiple mono inputs into a single multichannel output). Use these if you don't want to do the signal processing yourself in JavaScript; they will also be faster, since the Web Audio nodes are natively compiled code already inside the browser.
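If your goal is an offline mixdown rather than live playback, you can also run the same node graph through an OfflineAudioContext, which renders faster than real time and hands back a single mixed AudioBuffer. A minimal sketch (the mixBuffers name is mine, not from the article; it assumes all buffers share one sample rate):
// Sketch: mix several decoded AudioBuffers into one rendered AudioBuffer,
// without any audible playback
function mixBuffers(buffers, sampleRate) {
    var length = Math.max.apply(null, buffers.map(function (b) { return b.length; }));
    var offline = new OfflineAudioContext(2, length, sampleRate);
    buffers.forEach(function (buffer) {
        var source = offline.createBufferSource();
        source.buffer = buffer;
        source.connect(offline.destination); // all sources sum at the destination
        source.start(0);
    });
    return offline.startRendering(); // a Promise resolving with the mixed buffer
}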
Following that complete guide linked above, there are also options to export the file (as a .wav in the demo's case) using code like the following:
var rate = 22050;

function exportWAV(type, before, after) {
    if (!before) { before = 0; }
    if (!after) { after = 0; }

    var channel = 0,
        buffers = [];
    for (channel = 0; channel < numChannels; channel++) {
        buffers.push(mergeBuffers(recBuffers[channel], recLength));
    }

    var i = 0,
        offset = 0,
        newbuffers = [];
    for (channel = 0; channel < numChannels; channel += 1) {
        offset = 0;
        newbuffers[channel] = new Float32Array(before + recLength + after);
        // Pad the start of the recording with `before` samples of silence
        if (before > 0) {
            for (i = 0; i < before; i += 1) {
                newbuffers[channel].set([0], offset);
                offset += 1;
            }
        }
        newbuffers[channel].set(buffers[channel], offset);
        offset += buffers[channel].length;
        // Pad the end of the recording with `after` samples of silence
        if (after > 0) {
            for (i = 0; i < after; i += 1) {
                newbuffers[channel].set([0], offset);
                offset += 1;
            }
        }
    }

    // Interleave the two channels into a single buffer for stereo output
    var interleaved;
    if (numChannels === 2) {
        interleaved = interleave(newbuffers[0], newbuffers[1]);
    } else {
        interleaved = newbuffers[0];
    }

    var downsampledBuffer = downsampleBuffer(interleaved, rate);
    var dataview = encodeWAV(downsampledBuffer, rate);
    var audioBlob = new Blob([dataview], { type: type });

    this.postMessage(audioBlob);
}
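In the demo, that function runs inside a worker and posts the Blob back to the page. Once you have the Blob on the main thread, triggering a browser download only needs an object URL and a temporary anchor element. A small sketch (the downloadBlob helper is my own, not part of the guide):
// Sketch: offer a Blob as a .wav download in the browser
function downloadBlob(blob, filename) {
    var url = URL.createObjectURL(blob);
    var a = document.createElement('a');
    a.href = url;
    a.download = filename || 'mix.wav';
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(url); // release the object URL once the click has fired
}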
So I think Web Audio has everything you could want for this purpose! It could be challenging depending on your web development experience, but it's a skill definitely worth learning!
Do we need to use another library?
If you can, I think it's definitely worth trying with Web Audio, as you'll almost certainly get the best processing speed, but there are other libraries, such as Pizzicato.js, to name one. I'm sure you will find plenty of others.

Related

Playback when using Web Audio API skips at the beginning of every chunk

I've been building a music app, and today I finally got around to trying to get music playback working in it.
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS. I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end, where they are processed by the Web Audio API and scheduled to play.
When they play, they are all in the correct order, but there is a very tiny glitch or skip at the same spots every time (presumably between chunks) that I can't seem to get rid of. As far as I can tell, they are all scheduled right up next to each other so I can't find a reason why there should be any sort of gap or overlap between them. Here's the code:
Socket Route
socket.on('stream-audio', () => {
    db.client.db("dev").collection('music.files').findOne({"metadata.songId": "3"}).then((result) => {
        const bucket = new GridFSBucket(db.client.db("dev"), {
            bucketName: "music"
        });
        bucket.openDownloadStream(result._id).on('data', (chunk) => {
            socket.emit('audio-chunk', chunk);
        });
    });
});
Front end
//These variables are declared as object variables, hence all of the "this" keywords
context: new (window.AudioContext || window.webkitAudioContext)(),
freeTime: null,
numChunks: 0,
chunkTracker: [],
...
this.socket.on('audio-chunk', (chunk) => {
    //Keep track of chunk decoding status so that chunks don't get scheduled out of order
    const chunkId = this.numChunks;
    this.chunkTracker.push({
        id: chunkId,
        complete: false,
    });
    this.numChunks += 1;
    //Callback for the decodeAudioData function
    const decodeCallback = (buffer) => {
        var shouldExecute = false;
        const trackIndex = this.chunkTracker.map((e) => e.id).indexOf(chunkId);
        //Check whether this is the first chunk or the previous chunk has completed
        if (trackIndex !== 0) {
            const prevChunk = this.chunkTracker.filter((e) => e.id === (chunkId - 1));
            if (prevChunk[0].complete) {
                shouldExecute = true;
            }
        } else {
            shouldExecute = true;
        }
        //THIS IS THE ACTUAL WEB AUDIO API STUFF
        if (shouldExecute) {
            if (this.freeTime === null) {
                this.freeTime = this.context.currentTime;
            }
            const source = this.context.createBufferSource();
            source.buffer = buffer;
            source.connect(this.context.destination);
            if (this.context.currentTime >= this.freeTime) {
                source.start();
                this.freeTime = this.context.currentTime + buffer.duration;
            } else {
                source.start(this.freeTime);
                this.freeTime += buffer.duration;
            }
            //Mark this chunk as complete in the tracker
            this.chunkTracker[trackIndex] = {id: chunkId, complete: true};
        } else {
            //If the previous chunk hasn't been processed yet, check again in 50ms
            setTimeout((passBuffer) => {
                decodeCallback(passBuffer);
            }, 50, buffer);
        }
    };
    decodeCallback.bind(this);
    this.context.decodeAudioData(chunk, decodeCallback);
});
Any help would be appreciated, thanks!
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS.
You can do this if you want, but these days we have tools like Minio, which can make this easier using more common APIs.
I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end
Don't go this route. There's no reason for the overhead of web sockets, or Socket.IO. A normal HTTP request would be fine.
where they are processed by the Web Audio API and scheduled to play.
You can't stream this way. The Web Audio API doesn't support useful streaming, unless you happened to have raw PCM chunks, which you don't.
As far as I can tell, they are all scheduled right up next to each other so I can't find a reason why there should be any sort of gap or overlap between them.
Lossy codecs aren't going to give you sample-accurate output. Especially with MP3, if you give it some arbitrary number of samples, you're going to end up with at least one full MP3 frame (~576 samples) output. The reality is that you need data ahead of the first audio frame for it to work properly. If you want to decode a stream, you need a stream to start with. You can't independently decode MP3 this way.
Fortunately, the solution also simplifies what you're doing. Simply return an HTTP stream from your server, and use an HTML audio element <audio> or new Audio(url). The browser will handle all the buffering. Just make sure your server handles range requests, and you're good to go.
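A minimal sketch of that approach, reusing the GridFS setup from the question (the /music/:songId route is my own naming, and full Range request handling is omitted here for brevity):
// Server side (Express, sketch): pipe the GridFS download stream straight
// into an ordinary HTTP response instead of emitting socket chunks
app.get('/music/:songId', (req, res) => {
    db.client.db("dev").collection('music.files')
        .findOne({ "metadata.songId": req.params.songId })
        .then((result) => {
            const bucket = new GridFSBucket(db.client.db("dev"), { bucketName: "music" });
            res.set('Content-Type', 'audio/mpeg');
            bucket.openDownloadStream(result._id).pipe(res);
        });
});

// Front end: let the browser buffer, decode, and schedule the stream
const audio = new Audio('/music/3');
audio.play();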

How to get the right amplitudes (without playing) of audio track samples with the Web Audio API

I did some experiments with the per-sample volume data provided by the Web Audio API, and it appears to differ from the data I get from other programs, such as Audacity. The duration comes out about 40 seconds longer if I divide the samples array length by the sample rate (leftChannel.length/44100 in my case), and where Audacity shows loud samples, my script shows quiet ones (sometimes almost silent). I played back the pieces my script reports as quiet in Audacity, and there is definitely loud sound there.
So the question is: is my way of finding the amplitudes of audio samples correct? And what is wrong with my code?
I've read the thread about creating a waveform for an audio track, Create a waveform of the full track with Web Audio API, and there is a good example of what I used, so here is my code:
(function($) {
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    var context = new window.AudioContext(); // Create audio container
    var audioData;
    var checkPoint = 21; //this is a point in seconds, just for debugging; remove later!!

    function decode(audioData) {
        try {
            context.decodeAudioData(audioData,
                function(decoded) {
                    drawSound(decoded);
                    playAudio(decoded);
                },
                function() { // do nothing here
                });
        } catch(e) {
            console.log('decode exception', e.message);
        }
    }

    function drawSound(buffer) {
        var leftChannel = buffer.getChannelData(0);
        console.log('audio duration: ' + leftChannel.length/44100 + ' sec');
        for (var i = 0; i < leftChannel.length; i++) {
            var volume = leftChannel[i];
            //draw(volume, i);
            //this is just for debugging, remove later!!
            if (Math.abs(i - checkPoint * 44100) < 100) {
                console.log(leftChannel[i]);
            }
        }
    }

    function playAudio(buffer) {
        console.log('start playing audio');
        var audioTrack = context.createBufferSource();
        audioTrack.connect(context.destination);
        audioTrack.buffer = buffer;
        audioTrack.start();
    }

    $(document).ready(function() {
        $('body').append($('<input type="text" id="checkpoint" />'));
        $('#checkpoint').change(function() {
            checkPoint = $(this).val();
        });
        $('body').append($('<input type="file" id="audiofile" />'));
        $('input[type="file"]').change(function() {
            var reader = new FileReader();
            reader.onload = function(e) {
                audioData = this.result;
                decode(audioData);
            };
            reader.readAsArrayBuffer(this.files[0]);
        });
    });
})(jQuery);
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
</head>
<body>
</body>
</html>
If you think this is a bug in Chrome, please file a bug at crbug.com/new, and provide an audio track that produces different results along with details about the different machines.
Note also that Chrome uses ffmpeg to decode the files. The decoded file could differ in length from Audacity's.
I've made more thorough measurements and more testing and found that there is no error in my script, BUT!! The Web Audio API gives results equal to Audacity's, both for the length of the audio track and for the amplitudes, on one of my computers (in both Chrome and Firefox), and wrong results on another computer (again, browser-independently). So I conclude that the Web Audio API uses some low-level, hardware-dependent features, because I have the same OS (Windows 7) on both of my computers; the only difference is the hardware.
The code provided in my post works as expected.
Thanks to Raymond Toy, I've managed to solve this issue. It appears that the result returned by buffer.getChannelData(0) depends on the computer audio card's sample rate (which you can find via context.sampleRate). If you want to find the length of the track, you should do it like this:
var samples = buffer.getChannelData(0);
var duration = samples.length / context.sampleRate;
And the offset of a sample from the start, in seconds, is:
var sample = samples[i];
var offsetInSeconds = i / context.sampleRate;
My mistake was that I used the MP3 track's sample rate in the formulas, and this caused errors on the computer with a different audio card sample rate.
As Raymond Toy noticed, if you only need the duration of the audio, you can get it even more simply:
var duration = buffer.duration;
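As a follow-up: if you need sample data that is independent of the machine's audio card (for example, to match Audacity's numbers exactly), one option worth trying is decoding through an OfflineAudioContext pinned to a fixed rate, since decodeAudioData resamples to its context's sample rate. A sketch, not from the answers above (44100 is an assumed rate; audioData is the ArrayBuffer from the question's FileReader):
// Sketch: decode at a fixed sample rate instead of the hardware rate
// (constructor arguments are: channels, length in frames, sample rate)
var offline = new OfflineAudioContext(2, 44100, 44100);
offline.decodeAudioData(audioData).then(function (buffer) {
    console.log(buffer.sampleRate); // 44100, regardless of the audio card
    console.log(buffer.duration);   // now consistent across machines
});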

Streaming a growing file using the MediaSource API

So I have a .mp4 file that is still downloading, and I would like to stream it into a video element using the MediaSource API. How would I do this?
const NUM_CHUNKS = 5;

var video = document.querySelector('video');
video.src = video.webkitMediaSourceURL;

video.addEventListener('webkitsourceopen', function(e) {
    var chunkSize = Math.ceil(file.size / NUM_CHUNKS);

    // Slice the video into NUM_CHUNKS and append each to the media element.
    for (var i = 0; i < NUM_CHUNKS; ++i) {
        var startByte = chunkSize * i;

        // file is a video file.
        var chunk = file.slice(startByte, startByte + chunkSize);

        var reader = new FileReader();
        reader.onload = (function(idx) {
            return function(e) {
                video.webkitSourceAppend(new Uint8Array(e.target.result));
                logger.log('appending chunk:' + idx);
                if (idx == NUM_CHUNKS - 1) {
                    video.webkitSourceEndOfStream(HTMLMediaElement.EOS_NO_ERROR);
                }
            };
        })(i);

        reader.readAsArrayBuffer(chunk);
    }
}, false);
How would I dynamically change NUM_CHUNKS and slice the video?
The code you're using from Eric Bidelman chops up a video that the browser has already fully downloaded, to demonstrate how the API works. In reality, you'd slice the video on the server, and the client would download each chunk in order, probably with an AJAX request.
I'd first suggest you try your .mp4 in the demo code you have, because MediaSource seems pretty picky about the format of the video files it accepts. See Steven Robertson's answer about how to create an MP4 that'll work.
Then it's up to you whether you want to slice the video manually beforehand or do it dynamically on the server (which will vary depending on your server). The JavaScript client shouldn't care how many chunks there are or how large each chunk is, as long as they're fed in order (and I think the spec even allows some amount of out-of-order appending).
webkitMediaSourceURL is now outdated in Chrome; URL.createObjectURL() needs to be used instead.
The patch here: HTMLMediaElement to the new OO MediaSource API gave me some pointers as to what I needed to update in my code.
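For reference, here is a sketch of the same flow against the unprefixed MediaSource API that replaced those webkit* methods (the codec string and the /chunks/0 endpoint are assumptions; the exact codec string depends on how your mp4 was encoded):
var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource); // replaces video.webkitMediaSourceURL

mediaSource.addEventListener('sourceopen', function () {
    var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
    fetch('/chunks/0')                        // hypothetical endpoint serving one chunk
        .then(function (res) { return res.arrayBuffer(); })
        .then(function (data) {
            sourceBuffer.addEventListener('updateend', function () {
                mediaSource.endOfStream();    // replaces webkitSourceEndOfStream
            });
            sourceBuffer.appendBuffer(new Uint8Array(data)); // replaces webkitSourceAppend
        });
});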

Caching a background in Windows Metro App

I'm working on a WinJS Windows Metro application, and on one of my pages I get a URL to an image to display as a background. I can get that working just fine by using url(the URL of the image) and setting that as the style.backgroundImage.
I need to use that same image on a linked page, but that means I have to make another HTTP request, which I'm trying to avoid. I looked into alternatives and found LocalFolder as an option. The only issue is that I don't know how to access the file and set it as a background.
Is that the right way to go about caching data to reduce web calls?
Here's the code I'm using:
function saveBackground(url) {
    localFolder.createFileAsync("background.jpg", Windows.Storage.CreationCollisionOption.replaceExisting).then(function (newFile) {
        var uri = Windows.Foundation.Uri(url);
        var downloader = new Windows.Networking.BackgroundTransfer.BackgroundDownloader();
        var promise = downloader.createDownload(uri, newFile);
        promise.startAsync().then(function () {
            //set background here.
            var wrapper = document.getElementById("wrapper").style;
            localFolder.getFileAsync("background.jpg").then(function (image) {
                console.log(image.path);
                var path = image.path.split("");
                var newLocation = [];
                //This is just to make the backslashes work out for the url()
                for (var i = 0; i < path.length; i++) {
                    if (path[i] != '\\') {
                        newLocation.push(path[i]);
                    } else {
                        newLocation.push('\\\\');
                    }
                }
                console.log(newLocation);
                var newPath = newLocation.join("");
                var target = "url(" + newPath + ")";
                wrapper.backgroundImage = target;
                console.log(wrapper.backgroundImage);
                wrapper.backgroundSize = "cover";
            });
        });
    });
}
It depends on what kind of image you want to transfer and how many of them there are. If there is only one image and it's not a heavy one (roughly under 5 MB), I suggest you use WinJS.xhr, which lets you download data and, more importantly, starts downloading as soon as it's called.
BackgroundTransfer should be used for large data such as videos, music, and large images.
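For example, a quick sketch of the WinJS.xhr route (imageUrl is a placeholder for your image's URL):
// Sketch: download a small image with WinJS.xhr and use it immediately
WinJS.xhr({ url: imageUrl, responseType: "blob" }).then(function (request) {
    var blobUrl = URL.createObjectURL(request.response);
    var wrapper = document.getElementById("wrapper").style;
    wrapper.backgroundImage = "url(" + blobUrl + ")";
    wrapper.backgroundSize = "cover";
});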
Concerning the caching of your image: yes, you can of course do it with the local folder (and you should do it this way).
You should take a look at this series of articles by David Catuhe, which is really great:
http://blogs.msdn.com/b/eternalcoding/archive/2012/06/15/how-to-cook-a-complete-windows-8-application-with-html5-css3-and-javascript-in-a-week-day-0.aspx
Hope this helps.
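One more note on setting the cached file as a background: instead of escaping the backslashes in the file path as in the question's code, you should be able to reference a file saved in the local folder via the ms-appdata URI scheme. A sketch (assuming the file was saved as background.jpg in the local folder):
// Sketch: point CSS at the cached file directly via its ms-appdata URI
var wrapper = document.getElementById("wrapper").style;
wrapper.backgroundImage = "url('ms-appdata:///local/background.jpg')";
wrapper.backgroundSize = "cover";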

Why am I running out of memory when downloading multiple files in Appcelerator mobile on Android

I have also asked this question on the Appcelerator forum, but as I often get better answers from you lovely people here on Stack Overflow, I am asking it here too, just in case anyone can shed some light.
I have created a downloadQueue of URLs and am using it to download files with the HTTPClient. Each file in the downloadQueue is sent to the HTTPClient one at a time, with the next download being initiated only after the previous one has completed.
When I start the download, it seems to work correctly and manages to download several files before it simply freezes and I get an "out of memory" error in the DDMS error log.
I tried implementing suggestions found in other posts, a sample of which are:
[http://developer.appcelerator.com/question/28911/httpclient-leaks-easily-or-can-we-have-a-close-method#answer-104241][1]
[http://developer.appcelerator.com/question/35041/large-file-download-on-mobile][2]
[http://developer.appcelerator.com/question/120129/httpclient-and-setfile][3]
[http://developer.appcelerator.com/question/95521/httpclient---save-response-directly-to-file][4]
I tried all of the following:
- moving larger downloaded files directly from the nativePath rather than simply saving to file, in order to ensure that tmp files are not kept longer than necessary
- using the undocumented setFile method of the HTTPClient (this stopped my code dead without any error message, and as it is undocumented I have no idea if it was ever implemented on Android anyway)
- using a setTimeout in httpclient.onload after the file has been downloaded, to pause for 1 second before requesting the next file (I have no idea how this would help, but I am clutching at straws now)
Below are the relevant parts of my code (which is complete except for the GetFileUrls function, which I excluded for simplicity's sake, as all it does is return an array of URLs).
Can anyone spot anything that might be causing my memory issue? Does anyone have any ideas, as I have tried everything I can think of? (HELP!)
var count = 0;
var downloadQueue = [];
var rootDir = Ti.Filesystem.getExternalStorageDirectory();

downloadQueue = GetFileUrls(); /* this function is not included in order to keep my post as short as possible, but it returns an array of urls */
DownloadFile(downloadQueue[count]);

var downloader = Ti.Network.createHTTPClient({timeout: 10000});

downloader.onerror = function() {
    Ti.API.info(this.responseData);
};

downloader.onload = function() {
    SaveFile(this.folderName, this.fileName, this.responseData);
    count += 1;
    setTimeout(function() { DownloadFile(); }, 1000);
};

function DownloadFile() {
    if (count < downloadQueue.length) {
        var fileUrl = downloadQueue[count];
        var fileName = fileUrl.substring(fileUrl.lastIndexOf('/') + 1);
        downloader.fileName = fileName;
        downloader.folderName = rootDir;
        downloader.open('GET', fileUrl);
        downloader.send();
    }
}

function SaveFile(foldername, filename, response) {
    if (response.type == 1) {
        var f = Ti.Filesystem.getFile(response.nativePath);
        var dest = Ti.Filesystem.getFile(foldername, filename);
        if (dest.exists()) {
            dest.deleteFile();
        }
        f.move(dest.nativePath);
    } else {
        var dest = Ti.Filesystem.getFile(foldername, filename);
        dest.write(response);
    }
}
Try using events instead of the nested recursion you are using; Android does not seem to like that too much.
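A sketch of what that could look like using Titanium's app-level events instead of the setTimeout recursion (the download:next event name is my own invention):
// Sketch: advance the queue via a fired event rather than nested callbacks
Ti.App.addEventListener('download:next', function() {
    DownloadFile(); // DownloadFile() already checks count against the queue length
});

downloader.onload = function() {
    SaveFile(this.folderName, this.fileName, this.responseData);
    count += 1;
    Ti.App.fireEvent('download:next');
};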
