How to save an audio stream as a wav file using NodeJS - javascript

My aim is to record a radio stream coming from an Icecast server.
I am using the icecast node module to fetch the radio stream, then writing the WAV file by piping the stream through the wav module.
Here is an example of my code:
const icecast = require('icecast');
const wav = require('wav');

const url = 'http://87.118.104.139/radiogibsonaac';
let ice, fileWriter;

ice = icecast.get(url, res => {
  fileWriter = new wav.FileWriter(__dirname + '/recording.wav', {
    channels: 1,
    sampleRate: 16000,
    bitDepth: 128
  });
  res.pipe(fileWriter);
});
setTimeout(() => {
  fileWriter.end();
  ice.end();
}, 5000);
The stream is successfully recorded to my disk as expected and I am able to listen to the file in VLC, but the WAV file itself does not seem to be formed correctly.
When I try to use another tool to edit the file, it shows an error each time.
For example, I am trying to change the speed of the audio on this site and it does not recognise the file.
Also, if I try to view the file info using the SoX CLI, it displays:
sox FAIL formats: can't open input file `recording.wav': Sorry, don't understand .wav size
Does anybody know if I am missing a step in the process of writing the wav file to disk?

Based on the stream URL, it looks like the stream is in AAC format, and you are trying to write that data directly to a WAV file, so you end up with a file with a WAV header but AAC audio data.
You would need to either write the stream to disk as AAC, and then do a conversion on the file, or transcode the stream on-the-fly before writing it to disk.
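A minimal sketch of the on-the-fly approach (my own suggestion, assuming the ffmpeg CLI is installed and on the PATH; the URL and the 5-second timeout come from the question, everything else is illustrative):
const icecast = require('icecast');
const { spawn } = require('child_process');

const url = 'http://87.118.104.139/radiogibsonaac';

icecast.get(url, res => {
  // Let ffmpeg detect the AAC input on stdin and produce a real WAV file.
  const ffmpeg = spawn('ffmpeg', [
    '-i', 'pipe:0',   // read the stream from stdin
    '-ac', '1',       // downmix to mono
    '-ar', '16000',   // resample to 16 kHz
    __dirname + '/recording.wav'
  ]);
  res.pipe(ffmpeg.stdin);

  // Stop after 5 seconds, as in the original snippet; closing stdin
  // lets ffmpeg finalize the WAV header.
  setTimeout(() => {
    res.unpipe(ffmpeg.stdin);
    ffmpeg.stdin.end();
  }, 5000);
});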

Related

Is it possible to create an audio file based on CSV-data and an existing audio file?

I am working on a project in JavaScript, and I need to do a fairly strange task. I am not sure how to achieve it. I have looked into the most popular audio libraries, but there doesn't seem to be an easy way to just export a generated file quickly, without recording it in real time. I might be wrong.
I am going to have some data as JSON or in CSV format with numbers in each row. That number corresponds to seconds elapsed. This data tells me when a certain audio file needs to be played. The audio file is just a 5-second long clip of beeps.
I need to generate a long audio file that is just silent, but where the audio clip plays when "instructed" by the data. So it could play at 10 seconds, 45 seconds, 267 seconds, etc.
The finished audio file could be over an hour long. I was hoping to create a system where I could just select an audio file and a data file from my computer, click a button, let it process, and then download the finished file.
I hope what I want to do is clear. Can I use the Web Audio API for this, or do I need a library? I am stuck at the first part of the process, namely what to use and how to "create" a file from nothing.
I think you could try MediaRecorder from the MediaStream Recording API - it will let you create a recording function, something like this:
// inside an async function; getUserMedia returns a Promise
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const mediaRecorder = new MediaRecorder(stream);
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// then you can get your number from the JSON / CSV file and do:
mediaRecorder.start();
await sleep(YOUR_NUMBER_FROM_FILE);
mediaRecorder.stop();
The example above just creates a recording that lasts a given number of seconds.
You can load your existing audio files and play them in the background; I suppose MediaRecorder will record them too.
See MDN Web Docs about MediaRecorder
To summarize: the Web Audio API and the MediaStream Recording API should be enough for your case.
I hope this helps in some way.
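For the scheduling part of the question, here is a rough sketch of how the two APIs could be combined (my own illustration, not a tested implementation; the clip URL and the times array are placeholders, and it must run inside an async function):
const ctx = new AudioContext();
const destination = ctx.createMediaStreamDestination();
const recorder = new MediaRecorder(destination.stream);
const chunks = [];
recorder.ondataavailable = e => chunks.push(e.data); // collect the output

// decode the 5-second beep clip (placeholder URL)
const response = await fetch('beep.wav');
const beepBuffer = await ctx.decodeAudioData(await response.arrayBuffer());

// seconds parsed from your CSV / JSON data
const times = [10, 45, 267];

recorder.start();
for (const t of times) {
  const source = ctx.createBufferSource();
  source.buffer = beepBuffer;
  source.connect(destination);
  source.start(ctx.currentTime + t); // schedule each beep
}

// stop shortly after the last beep has finished playing
setTimeout(() => recorder.stop(),
  (Math.max(...times) + beepBuffer.duration + 1) * 1000);
Note that this still records in real time (an hour of audio takes an hour to capture); if that is too slow, OfflineAudioContext is the API to look into for faster-than-real-time rendering.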

Load large video file in HTML

Here is my problem: I want to play a large video file (3.6 GB) stored in an S3 bucket, but it seems the file is too big and the page crashes after 30 seconds of loading.
This is my code to play the video:
var video = document.getElementById("video");
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', sourceOpen, { once: true });

function sourceOpen() {
  URL.revokeObjectURL(video.src);
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.f40028"');
  fetch('URL_TO_VIDEO_IN_S3')
    .then(response => response.arrayBuffer())
    .then(data => {
      // Append the data into the new sourceBuffer.
      sourceBuffer.appendBuffer(data);
    })
    .catch(error => {
    });
}
I saw that a blob URL could be a solution, but it didn't work well with my URL.
Take my answer with a grain of salt as I am no expert. However, I am working on something very similar at the moment.
I suspect your issue is that you're attempting to load the entire resource (video file) into the browser at once; an object URL backed by more than a gigabyte of data is extremely large.
What you need to do is use the readable stream from the body of your fetch request to process the video file chunk by chunk. As long as you aren't confined to working in the Safari browser, you should be able to use both the ReadableStream and WritableStream classes natively in the browser.
These two classes allow you to form what's called a pipe. In this case, you are "piping" data from the readable stream of your fetch request to a writable stream that you create, which is then used as the underlying source of data for your media source extension and its respective source buffers.
A stream pipe is very special in that it exhibits what's called backpressure. You should definitely look this term up and read about what it means. In this case, it means the browser will not request more data once it has enough to meet its needs for video playback; the exact amount it can hold at once is specified by you, the programmer, through something called a "high water mark" (you should also read about this).
This allows you to control when and how much data the browser requests from your (ongoing) fetch request.
NOTE: When you use .then(response => response.arrayBuffer()) you are telling the browser to wait for the entire resource to arrive and then turn the response into one giant array buffer.
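A minimal sketch of the chunk-by-chunk idea (my own illustration, assuming the file is already MSE-compatible, e.g. fragmented MP4; the function name is made up):
async function streamIntoSourceBuffer(url, sourceBuffer) {
  const response = await fetch(url);
  const reader = response.body.getReader(); // pulls one chunk per read()
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // wait until the SourceBuffer has finished its previous append
    if (sourceBuffer.updating) {
      await new Promise(resolve =>
        sourceBuffer.addEventListener('updateend', resolve, { once: true })
      );
    }
    sourceBuffer.appendBuffer(value);
  }
}
Because read() is only called when you are ready for more data, this gives you a crude form of the backpressure described above.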
OPTION 1
Use CloudFront to create an RTMP distribution for these resources.
It will distribute your video in a streaming fashion.
Create an RTMP distribution to speed up distribution of your streaming media files using Adobe Flash Media Server's RTMP protocol.
Please note that HTML5 does not support the RTMP format by default (without Flash).
Check here for options
JWPlayer supports RTMP playback using Flash (see this SO question).
---
OPTION 2
Use Elastic Transcoder to create HLS video (.m3u8 format). Again, the same JWPlayer can handle it with ease.
It's also mostly supported natively in HTML5. Check compatibility with H.264.
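For the playback side, a hedged sketch (hls.js is my suggestion here, not part of the original answer; it assumes the hls.js script is loaded and the playlist URL is a placeholder):
const video = document.getElementById('video');
const src = 'https://example.com/output/playlist.m3u8'; // Elastic Transcoder output

if (video.canPlayType('application/vnd.apple.mpegurl')) {
  video.src = src; // Safari / iOS play HLS natively
} else if (Hls.isSupported()) {
  const hls = new Hls(); // hls.js global
  hls.loadSource(src);
  hls.attachMedia(video);
}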

Play local video file in electron html5 video player using node.js fs.readStream()

I am developing a video player application which plays videos (.mp4) from the local filesystem using Node.js and Electron (therefore I am using Chromium's HTML5 video player).
Playing videos larger than 2 GB seems to be a problem with my current approach.
I used to read the local video files using fs.readFileSync and pass that Blob to the video player, as in this code:
this.videoNode = document.querySelector('video');
const file: Buffer = fs.readFileSync(video.previewFilePath);
this.fileURL = URL.createObjectURL(new Blob([file]));
this.videoNode.src = this.fileURL;
This does work for video files smaller than 2 GB. Files larger than 2 GB trigger the following error:
ERROR RangeError [ERR_FS_FILE_TOO_LARGE]: File size (2164262704) is greater than possible Buffer: 2147483647 bytes
    at tryCreateBuffer (fs.js:328)
    at Object.readFileSync (fs.js:364)
    at Object.fs.readFileSync (electron/js2c/asar.js:597)
I believe the solution is to pass a ReadStream to the HTML5 video player using fs.createReadStream(). Unfortunately, I cannot find any documentation on how to pass this stream to the video player.
As the topic says, you are using Electron, and from the comments above it is clear that you are avoiding a server. If you are just creating an offline video player, you are making things more complex than they need to be. Why create a buffer and then a new URL? You can achieve this by simply getting the video path and using it as the src attribute of the video element.
Your code should look like this:
var path = "path/to/video.mp4"; // you can get it via a simple input tag with type=file, or using Electron dialogs
this.videoNode = document.querySelector('video'); // it should be a video element in your HTML
this.videoNode.src = path;
this.videoNode.oncanplay = () => {
  // do something...
};
This will handle the complete file, and you don't need to disable anything in webPreferences, given that videoNode is the video element in your HTML file.
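If you go the Electron-dialog route mentioned in the comment above, a rough sketch of the main-process side (the 'pick-video' channel name is made up for illustration):
// main process: let the user pick a video and hand the path to the renderer
const { dialog, ipcMain } = require('electron');

ipcMain.handle('pick-video', async () => {
  const { canceled, filePaths } = await dialog.showOpenDialog({
    properties: ['openFile'],
    filters: [{ name: 'Videos', extensions: ['mp4'] }]
  });
  return canceled ? null : filePaths[0];
});

// renderer: ipcRenderer.invoke('pick-video').then(path => { videoNode.src = path; });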
You can take a look at this open-source media player project made using Electron:
https://github.com/HemantKumar01/ElectronMediaPlayer
Disclaimer: I am the owner of the above project, and everyone is invited to contribute to it.

WAV file from user microphone vs. WAV file from file: Some difference is causing bugs, but what is the difference?

Right now I have two methods of sending a WAV file to the server: a user can directly upload the file, or make a recording with their microphone. Once the files are sent, they are processed in nearly the same way. The file is sent to S3, and can later be played by clicking a link (which plays the file via audio = new Audio('https://S3.url'); audio.play()).
When dealing with a file from the microphone:
audio.play() seems to run, and everything in the audio object is identical (except for the URL itself), but the sound won't actually play through the speakers. On the other hand, for an uploaded file, the sound plays through the speakers.
When I visit the URLs directly, both of them open up the sound player (in Chrome) or prompt a download of a WAV file (in Firefox). The sound player plays both sounds appropriately, and the downloaded WAV files each contain their respective sound, which other programs can play.
If I actually download the file with sound from the user's microphone instead of sending it directly to the server, then manually upload the WAV file, everything works as it should (as it would with any other uploaded WAV file).
In any scenario where the microphone-sound is uploaded somewhere, then downloaded, it is downloaded as a WAV file and plays accordingly. Anything which uses the re-uploaded WAV file works as intended.
Here's how I'm getting the sound from the user's microphone. First, I use WebAudioTrack to place a record button on my webpage. Once the user stops their recording, they hit the submit button which runs:
saveRecButton.addEventListener("click", function() {
  save_recording(audioTrack.audioData.getChannelData(0));
});
Here, audioTrack.audioData is an AudioBuffer containing the recorded sound. getChannelData(0) is a Float32Array representing the sound. I send this array to the server (Django) via AJAX:
function save_recording(channelData) {
  var uploadFormData = new FormData();
  uploadFormData.append('data', $('#some_field').val());
  ...
  uploadFormData.append('audio', channelData);
  $.ajax({
    'method': 'POST',
    'url': '/soundtests/save_recording/',
    'data': uploadFormData,
    'cache': false,
    'contentType': false,
    'processData': false,
    success: function(dataReturned) {
      if (dataReturned != "success") {
        // [- Do Some Stuff -]
      }
    }
  });
}
Then, using wavio, a WAV file is written from an array:
import wavio
import tempfile
from numpy import array

def save_recording(request):
    if request.is_ajax() and request.method == 'POST':
        form = SoundForm(request.POST)
        if form.is_valid():
            with tempfile.NamedTemporaryFile() as sound_recording:
                sound_array_string = request.POST.get('audio')
                sound_array = array([float(x) for x in sound_array_string.split(',')])
                wavio.write(sound_recording, sound_array, 48000, sampwidth=4)
                sound_recording.seek(0)
                s3_bucket.put_object(Key=some_key, Body=sound_recording, ContentType='audio/x-wav')
                return HttpResponse('success')
Then, when the sound needs to be listened to:
In Python:
import boto3
session = boto3.Session(aws_access_key_id='key', aws_secret_access_key='s_key')
bucket = session.resource('s3').Bucket(name='bucket_name')
url = session.client('s3').generate_presigned_url('get_object', Params={'Bucket': bucket.name, 'Key': 'appropriate_sound_key'})
Then, in JavaScript:
audio = new Audio('url_given_by_above_python')
audio.play()
The audio plays well if I upload a file, but doesn't play at all if I use the user's microphone. Is there something about WAV files I might be missing that happens when I upload the microphone sound to S3 and then re-download it? I have no clue where to go next; a dump of the two Audio objects (one whose URL comes from the user's mic, the other created from the re-uploaded copy of that exact same mic file) looks exactly the same, except for the URL, which, upon visiting or downloading, plays both sounds.
There's got to be some difference here, but I have no idea what it is, and have been struggling with this for a few days now. :(
The sound file you're creating is 32-bit PCM, which is an arguably non-standard audio codec. Chrome supports it (source) but Firefox does not (source, bug).
Encode it as 16-bit PCM and it'll be universally acceptable.
EDIT: As mentioned in the comments, the sampwidth argument to wavio.write above is the parameter in question (sampwidth=4 writes 32-bit samples; sampwidth=2 writes 16-bit).
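If you ever wanted to do the conversion client-side instead, a rough sketch of what "encode as 16-bit" means for the Float32Array samples (my own illustration; the simpler fix is just the sampwidth change above):
// clamp and scale Float32 samples (-1..1) to 16-bit signed integers
function floatTo16BitPCM(float32Samples) {
  const int16 = new Int16Array(float32Samples.length);
  for (let i = 0; i < float32Samples.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  return int16;
}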

flash voice recording, then upload to server

I'm looking for a flash widget that allows users to record their audio and then send it to the server.
There are a few similar questions:
Record Audio and Upload as Wav or MP3 to server
They advocate using Red5 or Flash Media Server.
Shouldn't it be possible to record locally on the user's client using the codecs the user already has, and then upload the resulting file to the server, rather than, say, process and record the stream on the server itself?
Thanks.
According to the Capturing Sound Input article, if you are running Flash Player 10.1 you can save the microphone data to a ByteArray. The "Capturing microphone sound data" section gives the following example of how to do it:
var mic:Microphone = Microphone.getMicrophone();
var soundBytes:ByteArray = new ByteArray(); // buffer for the recorded samples

mic.setSilenceLevel(0, DELAY_LENGTH);
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

function micSampleDataHandler(event:SampleDataEvent):void {
    while (event.data.bytesAvailable) {
        var sample:Number = event.data.readFloat();
        soundBytes.writeFloat(sample);
    }
}
Once you have the ByteArray you can of course do whatever you want with it; for example, you can pass it in with NetStream.appendBytes().
