I have an audio file with the extension .raw, recorded in stereo, and I need to convert it to mono with Node. I'm not able to find an example of how to do this, or a library for it.
Any help would be great.
You can use the following library to use ffmpeg from Node:
https://github.com/fluent-ffmpeg/node-fluent-ffmpeg
And the following command to convert stereo to mono via ffmpeg:
ffmpeg -i stereo.flac -ac 1 mono.flac
Just pass the above options to ffmpeg via the library.
Here's my code, which seems to work:
var FFmpeg = require('fluent-ffmpeg');
var command = FFmpeg({ source: 'test.webm' })
    .addOption('-ac', 1)
    .saveToFile('out.mp3');
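fluent-ffmpeg can probe the format of container files like webm, but the original question is about a headerless .raw file, so ffmpeg must also be told the input's sample format, rate, and channel count. Here is a minimal sketch, assuming 16-bit little-endian PCM at 44.1 kHz; the s16le format and the 44100 rate are assumptions, so match them to however the file was recorded:
var FFmpeg = require('fluent-ffmpeg');

FFmpeg('input.raw')
    // raw PCM has no header, so describe the input explicitly
    .inputFormat('s16le')                  // assumed sample format
    .inputOptions(['-ar 44100', '-ac 2'])  // assumed rate; the input is stereo
    .audioChannels(1)                      // downmix to mono, same as -ac 1
    .on('error', function (err) { console.error(err.message); })
    .saveToFile('output.wav');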
Related
I am trying to build a real-time streaming app using websockets and ffmpeg.
I have 2 processes on the host machine for capturing video (screen) and audio (microphone) every second. Those 2 files (one for a second of audio, one for a second of video) are combined into a single mp4 file using ffmpeg:
ffmpeg -y -i video.mkv -i audio.wav -map 0:v -map 1:a -c:v libx264 -tune zerolatency -preset ultrafast -shortest -movflags frag_keyframe+empty_moov+default_base_moof -r 25 -g 50 -pass 1 -f mp4 -b:v 2M pipe:1
This file is sent to the client machine using a websocket.
On the web part I initialize a MediaSource object with a sourceBuffer.
videoSourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.4D0033, mp4a.40.2"');
But I had a problem: both audio and video were lagging behind. When I started the stream the delay was about 2 seconds; after 2 minutes the delay was 10 seconds.
I managed to see how many buffered frames I had in the sourceBuffer and speed up the stream to keep up with the source, using this piece of code from "How can I find remaining frames in a MediaSource SourceBuffer?":
let getBufferedLength = () => videoSourceBuffer.buffered.end(0) - videoElement.currentTime;
Now the video stream always has a delay of about 1.5 seconds, but the audio lags behind.
I cannot find a way to check how many audio frames are buffered, or anything similar, so I can sync the audio and the video.
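If the audio ends up in its own SourceBuffer, one possible approach is to measure its buffered lead the same way as the video's and nudge the media element's playbackRate until both drain. This is only a sketch; audioSourceBuffer is a hypothetical handle to the audio buffer, and the thresholds are arbitrary:
// Sketch: assumes separate video/audio SourceBuffers on one MediaSource,
// both attached to the same videoElement (audioSourceBuffer is hypothetical).
let getLag = (sb) =>
    sb.buffered.length ? sb.buffered.end(0) - videoElement.currentTime : 0;

setInterval(() => {
    // Take the larger of the two lags so neither track runs ahead.
    let lag = Math.max(getLag(videoSourceBuffer), getLag(audioSourceBuffer));
    // Play slightly faster while the buffers run long, normally otherwise.
    videoElement.playbackRate = lag > 2 ? 1.1 : 1.0;
}, 1000);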
I am struggling with an ffmpeg media conversion script.
I am using the fluent-ffmpeg library with Node.js.
My app is supposed to receive a stream as input, resize it using ffmpeg, and then output a stream.
However, I am absolutely unable to get ffmpeg to process an input stream, even when specifying the input format (ffmpeg's -f option).
Yet when executing the exact same ffmpeg command on an mp4 file (without extension), it works and properly converts the media!
Working code (no stream)
import * as ffmpeg from 'fluent-ffmpeg';
ffmpeg('myMp4File')
    .inputFormat('mp4')
    .audioCodec('aac')
    .videoCodec('libx264')
    .format('avi')
    .size('960x540')
    .save('mySmallAviFile');
Failing code (using stream)
import * as ffmpeg from 'fluent-ffmpeg';
import { createReadStream } from 'fs';
ffmpeg(createReadStream('myMp4File'))
    .inputFormat('mp4')
    .audioCodec('aac')
    .videoCodec('libx264')
    .format('avi')
    .size('960x540')
    .save('mySmallAviFile');
It generates the following ffmpeg error:
Error: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!
This error explicitly says that ffmpeg could not identify the input format, despite the -f mp4 argument.
I have read pages and pages of ffmpeg's manual but could not find any relevant information concerning my issue.
Complementary information
Here is the output of command._getArguments(), showing the full ffmpeg command built by the library:
[
  '-f', 'mp4',
  '-i', 'pipe:0',
  '-y',
  '-acodec', 'aac',
  '-vcodec', 'libx264',
  '-filter:v', 'scale=w=960:h=540',
  '-f', 'avi',
  'mySmallAviFile'
]
So the full ffmpeg command is the following:
ffmpeg -f mp4 -i pipe:0 -y -acodec aac -vcodec libx264 -filter:v scale=w=960:h=540 -f avi mySmallAviFile
I was getting the same error, but only for files which had the moov atom metadata at the end of the file. (A pipe cannot be seeked, so ffmpeg cannot jump ahead to read a trailing moov atom and never learns the stream layout.)
After moving the moov atom to the beginning of the file with:
ffmpeg -i input.mp4 -movflags faststart out.mp4
the error disappeared.
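If you control the source files, the same remux can be done from Node with fluent-ffmpeg before the file is ever read as a stream. A minimal sketch, with illustrative file names:
import * as ffmpeg from 'fluent-ffmpeg';

// Remux once so the moov atom sits at the front of the file; afterwards
// the file can be consumed from a non-seekable stream such as pipe:0.
ffmpeg('input.mp4')
    .audioCodec('copy')                  // remux only, no re-encoding
    .videoCodec('copy')
    .outputOptions('-movflags faststart')
    .save('input-faststart.mp4');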
I want to upload video to Azure Blob storage. I've successfully uploaded it using JavaScript, but now I want to compress the video (or the block size) during the upload.
Is there any way to do that? Thanks in advance.
OK, fine. Then how do I do it on the server side?
I suggest you use ffmpeg to compress your video file in an Azure WebJob.
You could use the command below in the WebJob to compress your video file:
-i {inputfile} -vcodec h264 -b:v 250k -acodec mp2 {outputfile}
You could first upload the file to your Azure web site root path, then add the file name as a queue message to Azure Queue storage.
When the WebJob receives the file name, it runs ffmpeg to compress the video file. After compression completes, it uploads that file to Blob storage and deletes the uploaded file.
For more detail, refer to the steps below.
1. Create new folders to store the ffmpeg exe file and the uploaded/compressed files.
2. Create a WebJob project and install the MediaToolkit package from NuGet.
3. Add the code below to the WebJob function.
public static void ProcessQueueMessage(
    [QueueTrigger("blobcopyqueue")] string filename, TextWriter log,
    [Blob("textblobs/{queueTrigger}", FileAccess.Write)] Stream blobOutput)
{
    // set the input file path
    string inputFile = string.Format(@"D:\home\site\wwwroot\video\{0}", filename);
    // set the output file path
    string outputFile = string.Format(@"D:\home\site\wwwroot\video-compress\{0}", filename);

    using (var engine = new Engine(@"D:\home\site\wwwroot\compress\ffmpeg.exe"))
    {
        string command = string.Format(@"-i {0} -vcodec h264 -b:v 250k -acodec mp2 {1}", inputFile, outputFile);
        // change the command value to whatever options you want to use
        engine.CustomCommand(command);
    }

    using (var fileStream = System.IO.File.OpenRead(outputFile))
    {
        fileStream.CopyTo(blobOutput);
    }

    // after compressing, delete the files
    //File.Delete(inputFile);
    //File.Delete(outputFile);
}
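On the upload side, the queue message could be added from the same JavaScript that performs the upload. A hedged sketch using the azure-storage Node package: the queue name matches the QueueTrigger above, the connection string and file name are assumptions, and depending on SDK versions the WebJobs SDK may expect the message text to be base64-encoded.
var azure = require('azure-storage');
var queueService = azure.createQueueService(process.env.AZURE_STORAGE_CONNECTION_STRING);

// Enqueue the uploaded file's name so the WebJob picks it up for compression.
queueService.createMessage('blobcopyqueue', 'myvideo.mp4', function (error) {
    if (error) {
        console.error('Failed to enqueue:', error.message);
    } else {
        console.log('File name queued for compression');
    }
});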
I'm trying to do gapless playback of segments generated using ffmpeg:
I use ffmpeg to encode 3 files from a source with exactly 240000 samples @ 48 kHz, i.e. 5 seconds.
ffmpeg -i tone.wav -af atrim=start_sample=240000*0:end_sample=240000*1 -c:a opus 0.webm
ffmpeg -i tone.wav -af atrim=start_sample=240000*1:end_sample=240000*2 -c:a opus 1.webm
ffmpeg -i tone.wav -af atrim=start_sample=240000*2:end_sample=240000*3 -c:a opus 2.webm
When looking at the metadata (using ffprobe and ffmpeg -loglevel debug) for the files, I get the following values, which seem inconsistent to me:
Duration: 5.01
Start: 0.007
discard 648/900 samples
240312 samples decoded
If I have several of these files how would I play them seamlessly without gaps?
i.e. in a browser I've tried:
sourceBuffer.timestampOffset = 5 * n - 648/48000;
sourceBuffer.appendWindowStart = 5 * n;
sourceBuffer.appendWindowEnd = 5 * (n+1);
sourceBuffer.appendBuffer(new Uint8Array(buffer[n]));
However, there are audible gaps.
How many samples am I actually supposed to discard? 0.007 * 48000 = 336, the 648 reported as discarded, or the 240312 - 240000 = 312 extra decoded samples?
Here is an HTML page which can be opened in Chrome to test. You need a simple HTTP server to run it:
$ ls
index.html 0.webm 1.webm 2.webm
$ npm install -g http-server
$ http-server --cors
I understand Emscripten is a super powerful way to compile C code to JavaScript.
Is it possible to use this for video, capture the webcam and stream this over RTMP using something like the rtmpdump library?
rtmpdump can be recompiled to JavaScript using Emscripten. However, that does not guarantee that the recompiled code can execute within a JavaScript environment in the way the RTMP spec requires (namely, the requirement for TCP).
Steps used to recompile rtmpdump with Emscripten:
Obtain latest portable emscripten tools:
Obtain rtmpdump source:
git clone git://git.ffmpeg.org/rtmpdump
Clear make cache
make clean
Set the C compiler in the Makefile
Edit line 5 of the rtmpdump Makefile to read:
CC=$(CROSS_COMPILE)cc
Run emmake to create bytecode from make output:
emmake make CRYPTO=
(Per the rtmpdump README, I opted to use 'CRYPTO=' to build without SSL support, as it was giving errors.)
Run emcc to compile and link resulting bytecode into javascript:
emcc -O1 ./librtmp/*.o rtmpdump.o -o rtmpdump.js
Run the recompiled rtmpdump.js:
chmod 755 rtmpdump.js
node rtmpdump.js -r rtmp://127.0.0.1/live/STREAM_NAME
Of course, we will need a live RTMP stream to test against.
Steps to create live RTMP stream:
Obtain latest node-rtsp-rtmp-server:
git clone https://github.com/iizukanao/node-rtsp-rtmp-server.git
Add an mp4 to livestream over RTMP:
(Using Big Buck Bunny as our test video)
cd node-rtsp-rtmp-server/
npm install -d
cd file/
wget http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4
Start the RTMP server
sudo coffee server.coffee
Publish mp4 to RTMP server with ffmpeg
ffmpeg -re -i /node-rtsp-rtmp-server/file/BigBuckBunny_320x180.mp4 -c:v copy -c:a copy -f flv rtmp://localhost/live/STREAM_NAME
Observations
You should be able to confirm that the RTMP stream is successfully published by connecting with something like VLC Media Player. Once the stream is confirmed to be running properly, we can test rtmpdump.js with:
node rtmpdump.js -r rtmp://127.0.0.1/live/STREAM_NAME -o out.flv
However, we immediately encounter:
ERROR: RTMP_Connect0, failed to connect socket. 113 (Host is unreachable)
Conclusion
While my answer explores a path to recompiling rtmpdump and its supporting library (librtmp) to JavaScript, it does not produce a working implementation.
Some quick research shows that RTMP relies on TCP for transmission between server and client. JavaScript in the browser, by nature, confines communication to XHR and WebSocket requests only. The steps I have outlined produce XHR requests for the RTMP_Connect0 method, which are HTTP-based (i.e., not raw TCP). It may be possible to rewrite an RTMP client to use WebSockets and hand those connections over to TCP using something like WebSockify; however, if successful, you would trade RTMP's dependency on Flash for a dependency on WebSockify if you intend to consume an RTMP stream. Producing a Flash-less RTMP client does not appear to be a simple matter of recompiling RTMP to JavaScript, as the transport mechanism (TCP) must be accounted for.
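For illustration, here is a minimal sketch of such a WebSocket-to-TCP bridge in Node, in the spirit of WebSockify. The ports and host are illustrative, and this relays raw bytes only, with none of WebSockify's handshake or protocol negotiation:
const WebSocket = require('ws');
const net = require('net');

// Browser code speaks WebSocket to this bridge; the bridge relays the raw
// bytes to the RTMP server's TCP port (1935 by default).
const wss = new WebSocket.Server({ port: 8081 });

wss.on('connection', (ws) => {
    const tcp = net.connect(1935, '127.0.0.1');

    ws.on('message', (data) => tcp.write(data)); // browser -> RTMP server
    tcp.on('data', (data) => ws.send(data));     // RTMP server -> browser
    ws.on('close', () => tcp.end());
    tcp.on('close', () => ws.close());
    tcp.on('error', () => ws.close());
});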
Notes
For anyone looking to pick up this work, be aware that testing against a remote stream from a browser running a theoretically correct RTMP implementation in JavaScript would require CORS to be enabled on the remote host, due to the Same-Origin Policy. See: https://github.com/Bilibili/flv.js/blob/master/docs/cors.md