I'm struggling with an ffmpeg media-conversion script.
I'm using the fluent-ffmpeg library with Node.js.
My app is supposed to receive a stream as input, resize it using ffmpeg, and then output a stream.
However, I'm completely unable to get ffmpeg to process an input stream, even when specifying the input format (ffmpeg's -f option).
Yet when executing the exact same ffmpeg command on an mp4 file (without extension), it works and converts the media properly!
Working code (no stream)
import * as ffmpeg from 'fluent-ffmpeg';
ffmpeg('myMp4File')
  .inputFormat('mp4')
  .audioCodec('aac')
  .videoCodec('libx264')
  .format('avi')
  .size('960x540')
  .save('mySmallAviFile');
Failing code (using stream)
import * as ffmpeg from 'fluent-ffmpeg';
import { createReadStream } from 'fs';
ffmpeg(createReadStream('myMp4File'))
  .inputFormat('mp4')
  .audioCodec('aac')
  .videoCodec('libx264')
  .format('avi')
  .size('960x540')
  .save('mySmallAviFile');
It generates the following ffmpeg error:
Error: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!
This error explicitly says that ffmpeg could not identify the input format, despite the -f mp4 argument.
I read pages and pages of ffmpeg's man page, but I couldn't find any relevant information concerning my issue.
Complementary information
Here is the output of command._getArguments(), showing the full ffmpeg command built by the library:
[
'-f', 'mp4',
'-i', 'pipe:0',
'-y', '-acodec',
'aac', '-vcodec',
'libx264', '-filter:v',
'scale=w=960:h=540', '-f',
'avi', 'mySmallAviFile'
]
So the full ffmpeg command is the following:
ffmpeg -f mp4 -i pipe:0 -y -acodec aac -vcodec libx264 -filter:v scale=w=960:h=540 -f avi mySmallAviFile
I was getting the same error, but only for files that had the moov atom (the metadata) at the end of the file. ffmpeg cannot seek backwards in a pipe, so when the moov atom sits after the media data it cannot determine the format before hitting EOF.
After moving the moov atom to the beginning of the file with:
ffmpeg -i input.mp4 -movflags faststart out.mp4
the error disappeared.
Related
I used the code below to convert a pcap file to a text file with the given columns. The code doesn't give any error and it produces an output file, but the file is empty, without any data. Can you please help me find the error?
import os
from scapy.all import *

data = "Emotet-infection-with-Gootkit.pcap"
a = rdpcap(data)
os.system("tshark -T fields -e _ws.col.Info -e http -e frame.time -e "
          "data.data -w Emotet-infection-with-Gootkit.pcap > Emotet-infection-with-Gootkit.txt -c 1000")
os.system("tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit.pcap")
sessions = a.sessions()
i = 1
for session in sessions:
    http_payload = ""
tshark -F k12text -r a.pcap -w a.txt
"K12 text format" is a text packet capture format; it's what some Tektronix equipment can write out - in that sense, it's similar to writing out the raw hex data, plus some metadata. However, from a user-interface sense, it's more like "Save As..." in Wireshark, because it's a capture file format.
I guess this should work for you
The two tshark commands you're running are:
tshark -T fields -e _ws.col.Info -e http -e frame.time -e data.data -w Emotet-infection-with-Gootkit.pcap > Emotet-infection-with-Gootkit.txt -c 1000
That command will do a live capture from the default interface, write 1000 captured packets to a capture file named Emotet-infection-with-Gootkit.pcap (probably pcapng, not pcap, as pcapng has been Wireshark's default capture format for many years now), and write the Info column, the word "http" for HTTP packets and nothing for non-HTTP packets, the time stamp, and any otherwise not dissected data out to Emotet-infection-with-Gootkit.txt as text.
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit.pcap
That will read the Emotet-infection-with-Gootkit.pcap capture file and write out the HTTP packets in it to the same file. This is not recommended, because if you're reading from a file and writing to the same file in a given TShark command, you will write to the file as you're reading from it. This may or may not work.
If you want to extract the HTTP packets to a file, I'd suggest writing to a different file, e.g.:
tshark -r Emotet-infection-with-Gootkit.pcap -Y http -w Emotet-infection-with-Gootkit-http-packets.pcap
So your first command does a live capture, writes the packets to a pcapng file, and writes the fields in question to a text file, and your second command overwrites the capture file you wrote in the first command.
The first command should produce text - are you saying that the Emotet-infection-with-Gootkit.txt file is empty?
I have an audio file with the extension .raw, recorded in stereo, and I need to convert it to mono with Node. I'm not able to find an example of how to do this, or a library for it.
Any help would be great.
You can use the following library to use ffmpeg from Node:
https://github.com/fluent-ffmpeg/node-fluent-ffmpeg
And the following converts stereo to mono via ffmpeg:
ffmpeg -i stereo.flac -ac 1 mono.flac
Just pass the above option (-ac 1) to ffmpeg via the library.
Here's my code that seems to work:
var FFmpeg = require('fluent-ffmpeg');

var command = FFmpeg({
  source: 'test.webm'
}).addOption('-ac', 1)
  .saveToFile('out.mp3');
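Since a .raw file is just headerless PCM, the downmix can also be done in pure Node without spawning ffmpeg. A sketch, assuming interleaved 16-bit signed little-endian samples (the actual sample format and rate of your recording may differ; adjust accordingly):

```javascript
// Average each interleaved L/R pair of 16-bit samples into one mono sample.
// Assumption: the raw file is s16le, channels interleaved as L R L R ...
function stereoToMono(stereo) {
  const mono = Buffer.alloc(stereo.length / 2);
  for (let i = 0; i + 4 <= stereo.length; i += 4) {
    const left = stereo.readInt16LE(i);
    const right = stereo.readInt16LE(i + 2);
    mono.writeInt16LE(Math.round((left + right) / 2), i / 2);
  }
  return mono;
}

// Usage:
//   const fs = require('fs');
//   fs.writeFileSync('mono.raw', stereoToMono(fs.readFileSync('stereo.raw')));
```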
Let me explain the issue I'm facing with my code.
This is my JS file for use with PhantomJS. It simply tells PhantomJS to open a page, take screenshots of it, and write them to stdout.
var page = require("webpage").create();
page.viewportSize = { width: 640, height: 480 };
page.open("http://www.goodboydigital.com/pixijs/examples/12-2/", function() {
  setInterval(function() {
    page.render("/dev/stdout", { format: "png" });
  }, 25);
});
And this is the command I'm running in the Windows Command Prompt to feed the captured images to ffmpeg:
phantomjs runner.js | ffmpeg -y -c:v png -f image2pipe -r 25 -t 10 -i - -c:v libx264 -pix_fmt yuv420p -movflags +faststart dragon.mp4
This command successfully starts the PhantomJS and ffmpeg processes, but then nothing happens for quite some time; after 15 minutes it gives the error:
"Failed to reallocate parser buffer"
That's it. I referenced this code from the site below, where the developer claims that it works:
https://mindthecode.com/recording-a-website-with-phantomjs-and-ffmpeg/
It could be related to how PhantomJS's stdout is piped into ffmpeg's stdin: the continuous stream of image data eventually fills up the parser buffer, which produces the error.
You can review a well-organized canvas-recording application for Node.js, "puppeteer-recorder":
https://github.com/clipisode/puppeteer-recorder
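Before blaming ffmpeg's parser, it can help to verify that PhantomJS is actually emitting PNG frames at all. A small debugging sketch (my own aid, not from the linked article) that counts PNG signatures in a chunk of piped data:

```javascript
// Every PNG starts with the fixed 8-byte signature 89 50 4E 47 0D 0A 1A 0A.
// Counting its occurrences in the piped stream tells you how many frames
// PhantomJS has produced. (Signatures split across chunk boundaries are
// not handled in this sketch.)
const PNG_SIGNATURE = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

function countPngFrames(chunk) {
  let count = 0;
  let idx = chunk.indexOf(PNG_SIGNATURE);
  while (idx !== -1) {
    count += 1;
    idx = chunk.indexOf(PNG_SIGNATURE, idx + PNG_SIGNATURE.length);
  }
  return count;
}

// Usage: pipe PhantomJS into this instead of ffmpeg and log the count:
//   phantomjs runner.js | node count-frames.js
```

If the count stays at zero, the problem is on the PhantomJS side (e.g. rendering to "/dev/stdout" may not behave the same on Windows), not in the ffmpeg command.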
I'm trying to do gapless playback of segments generated using ffmpeg.
I use ffmpeg to encode 3 files from a source with exactly 240000 samples @ 48 kHz, i.e. 5 seconds:
ffmpeg -i tone.wav -af atrim=start_sample=240000*0:end_sample=240000*1 -c:a opus 0.webm
ffmpeg -i tone.wav -af atrim=start_sample=240000*1:end_sample=240000*2 -c:a opus 1.webm
ffmpeg -i tone.wav -af atrim=start_sample=240000*2:end_sample=240000*3 -c:a opus 2.webm
When looking at the metadata (using ffprobe and ffmpeg -loglevel debug) for the files, I get the following values, which seem inconsistent to me:
Duration: 5.01
Start: 0.007
discard 648/900 samples
240312 samples decoded
If I have several of these files how would I play them seamlessly without gaps?
i.e. in a browser I've tried:
sourceBuffer.timestampOffset = 5 * n - 648/48000;
sourceBuffer.appendWindowStart = 5 * n;
sourceBuffer.appendWindowEnd = 5 * (n+1);
sourceBuffer.appendBuffer(new Uint8Array(buffer[n]));
However, there are audible gaps.
How many samples am I actually supposed to discard? 0.007 * 48000, 648, or 240312 - 240000?
Here is an HTML page which can be opened in Chrome to test. You need a simple HTTP server to run it:
$ ls
index.html 0.webm 1.webm 2.webm
$ npm install -g http-server
$ http-server --cors
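For reference, the three SourceBuffer values the snippet above sets can be computed in one place. This is only a restatement of the question's own approach (treat the discarded priming samples as the amount the audio is shifted, and crop each segment to its nominal bounds with the append window), not a confirmed fix for the gaps:

```javascript
// For segment n of a fixed-length segmented stream: shift the timeline
// back by the priming samples and crop to exactly [n*len, (n+1)*len).
function gaplessParams(n, segmentSeconds, primingSamples, sampleRate) {
  return {
    timestampOffset: n * segmentSeconds - primingSamples / sampleRate,
    appendWindowStart: n * segmentSeconds,
    appendWindowEnd: (n + 1) * segmentSeconds,
  };
}

// Usage (matching the question's numbers):
//   const p = gaplessParams(n, 5, 648, 48000);
//   sourceBuffer.timestampOffset = p.timestampOffset;
//   sourceBuffer.appendWindowStart = p.appendWindowStart;
//   sourceBuffer.appendWindowEnd = p.appendWindowEnd;
```

Whether 648 (the discarded priming), 336 (0.007 * 48000), or 312 (240312 - 240000) is the right primingSamples value is exactly what the question is asking; the helper just makes it easy to try each candidate.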
I understand Emscripten is a super powerful way to compile C code to JavaScript.
Is it possible to use this for video: capture the webcam and stream it over RTMP using something like the rtmpdump library?
rtmpdump can be recompiled to javascript using Emscripten. However, that does not guarantee that the recompiled code is capable of executing within a Javascript environment in the way that the RTMP spec requires (namely the requirement for TCP).
Steps used to recompile rtmpdump with Emscripten:
Obtain latest portable emscripten tools:
Obtain rtmpdump source:
git clone git://git.ffmpeg.org/rtmpdump
Clear make cache
make clean
Set the C compiler in the Makefile:
Edit line 5 of the rtmpdump Makefile to the following:
CC=$(CROSS_COMPILE)cc
Run emmake to create bytecode from make output:
emmake make CRYPTO=
(Per rtmpdump README, I opted to use 'CRYPTO=' to build without SSL support as it was giving errors)
Run emcc to compile and link resulting bytecode into javascript:
emcc -O1 ./librtmp/*.o rtmpdump.o -o rtmpdump.js
Run the recompiled rtmpdump.js:
chmod 755 rtmpdump.js
node rtmpdump.js -r rtmp://127.0.0.1/live/STREAM_NAME
Of course, we will need a live RTMP stream to test against.
Steps to create live RTMP stream:
Obtain latest node-rtsp-rtmp-server:
git clone https://github.com/iizukanao/node-rtsp-rtmp-server.git
Add an mp4 to livestream over RTMP:
(Using Big Buck Bunny as our test video)
cd node-rtsp-rtmp-server/
npm install -d
cd file/
wget http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4
Start the RTMP server
sudo coffee server.coffee
Publish mp4 to RTMP server with ffmpeg
ffmpeg -re -i /node-rtsp-rtmp-server/file/BigBuckBunny_320x180.mp4 -c:v copy -c:a copy -f flv rtmp://localhost/live/STREAM_NAME
Observations
You should be able to confirm that the RTMP stream is successfully published by connecting with something like VLC Media Player. Once we confirm the stream is properly running, we can test rtmpdump.js with:
node rtmpdump.js -r rtmp://127.0.0.1/live/STREAM_NAME -o out.flv
However, we immediately encounter:
ERROR: RTMP_Connect0, failed to connect socket. 113 (Host is unreachable)
Conclusion
While my answer explores a path to recompiling rtmpdump and its supporting library (librtmp) to JavaScript, it does not produce a working implementation.
Some quick research concludes that RTMP relies on TCP for transmission between server and client. JavaScript, by nature, confines communication to XHR and WebSocket requests only. The steps I have outlined produce XHR requests for the RTMP_Connect0 method, which are HTTP-based (i.e. not TCP). It may be possible to rewrite an RTMP client to use WebSockets and pass those connections on to TCP using something like WebSockify; however, if successful, you would move RTMP's dependency on Flash to a dependency on WebSockify if you intend to consume an RTMP stream. Producing a Flash-less RTMP client does not appear to be a simple matter of recompiling rtmpdump to JavaScript, as the transport mechanism (TCP) must be accounted for.
Notes
For anyone looking to pick up on this work, be aware that testing against a remote stream from a browser running a theoretically proper RTMP implementation in JavaScript would require that CORS is enabled on the remote host, due to the Same-Origin Policy. See: https://github.com/Bilibili/flv.js/blob/master/docs/cors.md