I'm using Xvfb to run a browser window and want to make a screen recording of it. I'm doing:
xvfb-run Firefox http://google.com
ffmpeg -y -r 30 -f x11grab -i :94.0 output.mp4
The output looks as if the colors are washed out.
Starting Xvfb with this option (a 24-bit color depth) fixes it:
Xvfb :1 -screen 0 1600x1200x24+32
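For completeness, the pieces can be tied together so the grab target matches the display Xvfb is serving. A minimal sketch, assuming display :1 and a 1600x1200 screen (printed as a dry run so the matching display numbers are visible; remove the echos to actually execute, which requires Xvfb, firefox and ffmpeg):

```shell
# Dry-run sketch: print the commands so the display number used by Xvfb,
# the browser, and the ffmpeg x11grab input visibly match.
DISP=:1
SIZE=1600x1200
echo "Xvfb $DISP -screen 0 ${SIZE}x24+32 &"
echo "DISPLAY=$DISP firefox http://google.com &"
echo "ffmpeg -y -f x11grab -framerate 30 -video_size $SIZE -i $DISP.0 output.mp4"
```

The key point is that the `-i` argument to ffmpeg must name the same display (and screen) that Xvfb was started with, and `-video_size` should match the Xvfb geometry.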
Related
I would like the console output of my web application to be forwarded to a centralized logging system. I am very flexible on the recipient side (syslog or whatever server).
There are plenty of packages for sending messages to servers (and a quick solution would be to `fetch()` to an ELK stack, for instance).
This is not what I am looking for: I would like the browser's console errors/warnings to be forwarded (and, in addition, the application-generated ones, but that part is easy).
Is there a consensus solution for this problem?
More of a POC than a solution perhaps, but the Remote Debugging Protocol can be used. With Firefox:
1- Start a debugging server; the console of this instance will be captured. Pages to debug should be opened here, and the dev tools must be open.
kdesu -u lmc -c "firefox --start-debugger-server 9123"
2- Start the debugging Firefox client by opening about:debugging and connecting to the debug server. This instance acts only as a receiver for the packets.
3- Use tshark to capture the packets. The output can be sent to the logging facility.
tshark -i lo -f 'tcp port 9123' -Y 'data.data contains "pageError"' -T ek -e tcp.payload | jq -r '.layers.tcp_payload[0]' - | head -n2 | xxd -r -p
Capturing on 'Loopback: lo'
Output
These JSON objects can then be sent to the logging facility.
3 682:{"type":"pageError","pageError":{"errorMessage":"This page uses the non standard property “zoom”. Consider using calc() in the relevant property values, or using “transform” along with “transform-origin: 0 0”.","errorMessageName":"","sourceName":"https://www.acordesdcanciones.com/2020/12/la-abandone-y-no-sabia-chino-hidalgo.html","sourceId":null,"lineText":"","lineNumber":0,"columnNumber":0,"category":"Layout","innerWindowID":10737418988,"timeStamp":1629339439629,"warning":true,"error":false,"info":false,"private":false,"stacktrace":null,"notes":null,"chromeContext":false,"cssSelectors":"","isPromiseRejection":false},"from":"server1.conn1.child83/consoleActor2"}
Another tshark one-liner
tshark -i lo -f 'tcp port 9123' -Y 'data.data contains "pageError"' -T fields -e data.data | tail -n +6 | head -n3 | xxd -r -p >> data.json
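As a hypothetical follow-up (not part of the captures above), once a pageError object has been reassembled, its interesting fields can be pulled out with plain grep/sed before forwarding. The sample message below is made up for illustration:

```shell
# Extract the errorMessage field from a captured pageError JSON object
# using only grep and sed (sample object is fabricated for the demo).
msg='{"type":"pageError","pageError":{"errorMessage":"demo error","warning":true}}'
echo "$msg" | grep -o '"errorMessage":"[^"]*"' | sed 's/^[^:]*://; s/"//g'
# -> demo error
```

In practice `jq` is a more robust choice for this, since a regex will break on messages containing escaped quotes; the sed variant is only shown because the pipeline above already avoids extra dependencies.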
I am creating a streaming app like YouTube, and while building it I am facing many challenges related to converting videos to different qualities.
My question is:
Should I convert the original video file into multiple video files (like 240p, 480p and 720p) and store them? Or is there a way to create a single video file which can be played in multiple qualities, like YouTube does?
Multiple video files is the way to go. Currently, the most common approach to adaptive streaming is MPEG-DASH. The different video sizes and an MPD manifest, which is like a playlist for the different video sizes, can be generated using ffmpeg and MP4Box. Many video players, e.g. Video.js or Dash.js, support adaptive streaming with MPEG-DASH.
Generate video files:
ffmpeg -y -i movie.avi -an -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 1500k -maxrate 1500k -bufsize 3000k -vf "scale=-1:720" movie-720.mp4
ffmpeg -y -i movie.avi -an -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 800k -maxrate 800k -bufsize 1600k -vf "scale=-1:540" movie-540.mp4
ffmpeg -y -i movie.avi -an -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 400k -maxrate 400k -bufsize 800k -vf "scale=-1:360" movie-360.mp4
ffmpeg -y -i movie.avi -vn -c:a aac -b:a 128k movie.m4a
Generate the manifest:
mp4box -dash-strict 2000 -rap -frag-rap -bs-switching no -profile "dashavc264:live" -out movie-dash.mpd movie-720.mp4 movie-540.mp4 movie-360.mp4 movie.m4a
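The three video commands above differ only in bitrate and height, so they can be generated in a loop. A sketch that just prints each command (a dry run; drop the echo to actually encode, and note bufsize is 2x the bitrate, mirroring the commands above):

```shell
# Build the three rendition commands from bitrate:height pairs matching
# the explicit commands above. Printed only; remove echo to execute.
for spec in 1500:720 800:540 400:360; do
  rate=${spec%:*}      # target bitrate in kbit/s
  height=${spec#*:}    # output height; width follows via scale=-1
  echo ffmpeg -y -i movie.avi -an -c:v libx264 \
    -x264opts "keyint=24:min-keyint=24:no-scenecut" \
    -b:v "${rate}k" -maxrate "${rate}k" -bufsize "$((rate * 2))k" \
    -vf "scale=-1:${height}" "movie-${height}.mp4"
done
```

The fixed keyframe interval (`keyint=24:min-keyint=24:no-scenecut`) matters here: DASH segment switching only works cleanly when all renditions share aligned keyframes.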
Original source: https://gist.github.com/andriika/8da427632cf6027a3e0036415cce5f54
Let me explain the issue I am facing with my code.
This is my js file for use with PhantomJS. It simply tells PhantomJS to open a page, take screenshots of it repeatedly, and write them to stdout.
var page = require("webpage").create();
page.viewportSize = { width: 640, height: 480 };
page.open("http://www.goodboydigital.com/pixijs/examples/12-2/", function() {
setInterval(function() {
page.render("/dev/stdout", { format: "png" });
}, 25);
});
And this is the command I'm running in the Windows Command Prompt to pipe the captured images into ffmpeg:
phantomjs runner.js | ffmpeg -y -c:v png -f image2pipe -r 25 -t 10 -i - -c:v libx264 -pix_fmt yuv420p -movflags +faststart dragon.mp4
This command successfully starts the PhantomJS and ffmpeg processes, but then nothing happens for quite some time; after 15 minutes it gives an error saying:
"Failed to reallocate parser buffer"
That's it. I referenced this code from this site, on which the developer claims that it works:
https://mindthecode.com/recording-a-website-with-phantomjs-and-ffmpeg/
Please see the attached Image for more explanation.
Image of Code
It could be related to how ffmpeg consumes stdin through the pipe: as images keep arriving, the parser buffer fills up and eventually raises the error.
You can review how this is handled in "puppeteer-recorder", a well-organized canvas-recording application for Node.js:
https://github.com/clipisode/puppeteer-recorder
I'm trying to do gapless playback of segments generated using ffmpeg.
I use ffmpeg to encode 3 files from a source with exactly 240000 samples @ 48 kHz, i.e. 5 seconds:
ffmpeg -i tone.wav -af atrim=start_sample=240000*0:end_sample=240000*1 -c:a opus 0.webm
ffmpeg -i tone.wav -af atrim=start_sample=240000*1:end_sample=240000*2 -c:a opus 1.webm
ffmpeg -i tone.wav -af atrim=start_sample=240000*2:end_sample=240000*3 -c:a opus 2.webm
When looking at the metadata (using ffprobe and ffmpeg -loglevel debug) for a file, I get the following values, which seem inconsistent to me:
Duration: 5.01,
Start 0.007
discard 648/900 samples
240312 samples decoded
If I have several of these files how would I play them seamlessly without gaps?
i.e. in a browser I've tried:
sourceBuffer.timestampOffset = 5 * n - 648/48000;
sourceBuffer.appendWindowStart = 5 * n;
sourceBuffer.appendWindowEnd = 5 * (n+1);
sourceBuffer.appendBuffer(new Uint8Array(buffer[n]));
However, there are audible gaps.
How many samples am I actually supposed to discard? 0.007 * 48000, 648, or 240312 - 240000?
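Just to make the arithmetic behind the three candidates explicit (at the 48 kHz sample rate; the raw figures come from the metadata above):

```shell
# The three candidate discard counts implied by the metadata, at 48 kHz:
echo $(( 7 * 48000 / 1000 ))   # 0.007 s start offset    -> 336 samples
echo $(( 648 ))                # the "discard 648/900 samples" figure
echo $(( 240312 - 240000 ))    # decoded minus expected  -> 312 samples
```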
Here is an HTML page which can be opened in Chrome to test it.
You need a simple HTTP server to run it:
$ ls
index.html 0.webm 1.webm 2.webm
$ npm install -g http-server
$ http-server --cors
I'm new to a lot of the things that YETI requires to run, and I've made it through most of the steps to get it working. I installed Cygwin so I can run Node.js and npm (I used these instructions). Once done, I ran npm install yeti, and Yeti installed. Now I can type things like this:
This is where I'm having problems. I can't figure out how to get Yeti to run the tests in demo.html. I can open my browser to file:///C:/test/demo.html and see the tests run (it's a YUI Test), so I know the problem is not demo.html being broken. Also, when I try to run Yeti as a server (yeti --server), it sits there after the line "to run and report the results" and doesn't let me do anything unless I exit with Ctrl-C, although I can go to localhost:8000 and see this:
If I try opening a new Cygwin console and doing this:
It gives me a bunch of errors that I don't understand.
Help!
How I did it on Ubuntu:
First install the Node dependencies; only install dependencies using apt-get.
You need at least:
sudo apt-get install build-essential libssl-dev python2.6
Also this link could be helpful => http://howtonode.org/how-to-install-nodejs (see the Ubuntu instructions).
Next, install node/npm the correct way on Ubuntu.
echo 'export PATH=$HOME/local/bin:$PATH' >> ~/.bashrc
. ~/.bashrc
mkdir ~/local
mkdir ~/node-latest-install
cd ~/node-latest-install
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1
./configure --prefix=~/local
make install # ok, fine, this step probably takes more than 30 seconds...
Close the terminal and open it again, then install npm:
curl http://npmjs.org/install.sh | sh
After that, install Yeti by issuing: $ npm install yeti@stable
Run Yeti from the terminal:
alfred@alfred-laptop:~/node/stackoverflow/4833633$ yeti
Yeti will only serve files inside /home/alfred/node/stackoverflow/4833633
Visit http://localhost:8000, then run:
yeti
to run and report the results.
Start the browsers you like and point them to => http://localhost:8000
Write your tests inside the folder where you started Yeti.
alfred@alfred-laptop:~/node/stackoverflow/4833633$ ls -al
total 16
drwxr-xr-x 2 alfred alfred 4096 2011-01-29 01:47 .
drwxr-xr-x 6 alfred alfred 4096 2011-01-29 01:27 ..
-rw-r--r-- 1 alfred alfred 6140 2011-01-29 01:47 simple.html
See the gist for a really simple example. I just copied the example from http://developer.yahoo.com/yui/3/examples/test/test-simple-example_clean.html but removed the <!--MyBlogLog instrumentation--> cruft. I also told it not to render the console by commenting out line 196 => //r.render('#testLogger'); (that last part is not even necessary, but I think the tests will run faster that way because it does not need to render the console).
Finally I just ran:
alfred@alfred-laptop:~/node/stackoverflow/4833633$ yeti simple.html
Waiting for results. When you're done, hit Ctrl-C to exit.
✔ Example Suite on Chrome (8.0.552.237) / Linux
6 passed, 0 failed
✔ Example Suite on Firefox (3.6.13) / Linux
6 passed, 0 failed
Success :)
Some extra information about my distro:
alfred@alfred-laptop:~/node/stackoverflow/4833633$ cat /etc/issue
Ubuntu 10.10 \n \l
alfred@alfred-laptop:~/node/stackoverflow/4833633$ python --version
Python 2.6.6
alfred@alfred-laptop:~/node/stackoverflow/4833633$ node -v
v0.2.6
alfred@alfred-laptop:~/node/stackoverflow/4833633$ npm -v
0.2.15
alfred@alfred-laptop:~/node/stackoverflow/4833633$ npm ls installed | grep yeti
npm info it worked if it ends with ok
npm info using npm@0.2.15
npm info using node@v0.2.6
yeti@0.1.2 The YUI Easy Testing Interface =reid active installed remote stable YUI web app YUITest TDD BDD yui3 test
npm ok