Capturing the data stream of a JS / Leaflet animation as MP4

How can I capture the data stream of a JS / Leaflet animation and download it as MP4?
I am looking for output that looks something like the smooth path traced in these demos:
https://github.com/IvanSanchez/Leaflet.Polyline.SnakeAnim
The author appears to have made them with ffcast or some other screencasting software.
However, I am looking for an automated solution that can be run as a script, ideally one that works on the data stream itself (not the screen), perhaps with a headless browser.
I have tried puppeteer-gif and puppeteer-gif-cast, but even the best frame rate is jumpy.
I have tried WebRTC-Experiment, but it requires me to set permissions manually. Ditto the Screen Capture API mentioned here, though that at least seems to work on the data stream itself.

The canvas captureStream method combined with the MediaRecorder API should do the trick.
Note that Chrome only supports WebM as a container format (though it does record H.264), so you might need a post-processing step with ffmpeg to get an MP4.
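For illustration, a minimal sketch of that approach (it assumes the Leaflet animation is rendered to a canvas element, e.g. via the map's preferCanvas option, and that the browser supports MediaRecorder; element names are placeholders):
// Record a canvas-rendered animation to a WebM blob and trigger a download.
var canvas = document.querySelector('canvas');   // the canvas Leaflet draws on (assumed)
var stream = canvas.captureStream(30);           // capture at ~30 fps
var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
var chunks = [];

recorder.addEventListener('dataavailable', function (e) { chunks.push(e.data); });
recorder.addEventListener('stop', function () {
    var blob = new Blob(chunks, { type: 'video/webm' });
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'animation.webm';
    a.click();
});

recorder.start();
// ... start the snake animation, then stop when it finishes, e.g.:
setTimeout(function () { recorder.stop(); }, 10000);
The resulting WebM file can then be converted to MP4 in a post-processing step, for example with ffmpeg -i animation.webm -c:v libx264 -pix_fmt yuv420p animation.mp4.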

Related

Async fetch & decode of .wav file works on desktop, but not on mobile (WebAudioAPI)

I'm currently working on a web-based synthesizer program using the Web Audio API. It works quite well on desktop, but on mobile it seems to fail to load and decode the impulse response from a WAV file.
The impulse response is used in conjunction with a convolver node to implement a convolution reverb. Oddly enough, everything else works fine on mobile - oscillators, waveshapers, gain nodes, etc. - and the application itself does not stop running (as it would if there were an unhandled exception). It's almost as if the file itself is not being decoded or loaded into the convolver node properly: when I send my sources to the node on mobile, it outputs only silence.
Here is the code I have for performing these tasks:
//calculate reverb impulse response & assign to convolver node buffer
calcIR();

async function calcIR() {
    let wavFile = await fetch("./wavData/ir4.wav");
    let wavBuffer = await wavFile.arrayBuffer();
    voice1.reverb.buffer = await synthCtx.decodeAudioData(wavBuffer);
}
In this case, buffer is the property of the convolver node which holds the decoded impulse response data, and synthCtx is my Web Audio context; ir4.wav is the actual impulse response. The async function call follows the synchronous instantiation of voice1, to make sure the nodes are actually initialized before trying to change one of their properties.
I have not stopped playing around and troubleshooting, and I have a few more things I will probably try, but I figured I'd consult SE just to see if anyone here has an idea. Also, just to clarify, I have indeed tested it on mobile, and I'm calling the resume() function on the context from a touchEnd event, so I know the issue isn't the context being muted. As above, everything works on mobile and sounds great - save for silence from the convolver node.
Any help is appreciated!
EDIT:
I've created a small jsfiddle demonstration to illustrate the problem I'm having.
The demo uses only a single square wave, routed into two gain nodes - one to the output (destination), and one to the convolver node (which is in turn routed to the destination). A slider controls the relative mix between the two.
Fully left is fully dry (100% directly to output, 0% to the convolver).
Fully right is fully wet (0% directly to output, 100% to the convolver).
Oddly enough, it seems I'm having the same problem in the fiddle, even though it works perfectly well inside my app. You can hear the oscillator fade away into silence as the slider is moved, when it should be slowly fading in the reverberated signal.
I've tried to keep the implementations similar, i.e. I call the calcIR() function at the exact same place in my demo as within the actual app (just after finishing the connections, and just before starting the oscillators).
I also tried throwing some console.log() calls into the async function, just to see how far it gets - it seems it's not even getting past the fetch stage. I've also confirmed that the availability of the WAV file is not the issue (it is hosted in a gcloud storage bucket and works elsewhere without any problems).
I admit I am still fairly new to async/await and working with promises in general, so I may just be misunderstanding something here. However, I'm not sure why one approach would work fine in one place and not at all in the other, though it certainly wouldn't be the first time.
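One way to narrow this down - a sketch only, reusing the voice1, synthCtx and file-path names from the snippet above - is to log each stage and surface any error, and to use the older callback form of decodeAudioData, since some mobile browsers do not return a promise from it:
// Diagnostic version of calcIR: log each stage, catch failures, and use the
// callback form of decodeAudioData for browsers that don't return a promise.
async function calcIR() {
    try {
        let response = await fetch("./wavData/ir4.wav");
        console.log("fetch status:", response.status);
        let wavBuffer = await response.arrayBuffer();
        console.log("buffer length:", wavBuffer.byteLength);
        voice1.reverb.buffer = await new Promise(function (resolve, reject) {
            synthCtx.decodeAudioData(wavBuffer, resolve, reject);
        });
        console.log("impulse response decoded");
    } catch (err) {
        console.error("calcIR failed:", err);
    }
}
Whichever stage logs last should point at the actual failure (e.g. a CORS error on the fetch, or a decoding error on the WAV data).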

Single stream audio/video in Chrome WebRTC without ssrc tags

Is single stream audio (or video) via Chrome's WebRTC possible when you strip a=ssrc lines from the SDP?
I have tried filtering out the a=ssrc lines (with the code below), but single-stream audio did not work. I also tried single-stream video, and renaming instead of removing the lines, with the same result. I modify both the offer and answer SDPs. Interestingly, this filtering works when you send SDPs with both audio and video - audio (only) will work in that scenario. However, I had issues with re-negotiation in that scenario in our app, so this is probably not a valid solution.
You can see a minimal example with single-stream audio / video in this repo: https://github.com/Tev-work/webrtc-audio-demo.
If it is possible, can you please provide a minimal example of code with working audio? Preferably using the repo above: what should the modifySdp function (in public/client.js) do?
Currently it modifies the SDP with this code:
sdp = sdp.replace(/a=ssrc/g, 'a=xssrc');
sdp = sdp.replace(/a=msid-semantic/g, 'a=xmsid-semantic');
sdp = sdp.replace(/a=mid/g, 'a=xmid');
sdp = sdp.replace(/a=group:BUNDLE/g, 'a=xgroup:BUNDLE');
If it is not possible, do you know whether such a limitation has been officially stated somewhere (please link it), or did it just become unworkable at some point? It seems like it was working before (around M29, see the comments here: https://bugs.chromium.org/p/webrtc/issues/detail?id=1941 - no mention that this was not supposed to work).
Motivation: we are sometimes sending SDPs via SIP PBXs, which sometimes filter out the SSRC lines. Supporting multiple streams in such situations is obviously out of the question (maybe with some server-side stream hacking?), but supporting at least audio-only in such scenarios would be useful for us.
That should still be possible, even though there are some side effects, like (legacy) getStats not recognizing the stream - see this bug: https://bugs.chromium.org/p/webrtc/issues/detail?id=3342.
What you are attempting is to remove the a=ssrc lines before calling setLocalDescription. That is probably not going to work. If you want to simulate the scenario, try removing them before calling setRemoteDescription with the SDP instead.
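A minimal sketch of that suggestion (pc and remoteDesc are assumed names, and this has not been tested against the repo above): leave the local description untouched and strip the a=ssrc lines only from the remote SDP before applying it:
// Strip a=ssrc lines from the remote SDP only, then apply it.
function stripSsrc(sdp) {
    return sdp.split('\r\n')
        .filter(function (line) { return line.indexOf('a=ssrc') !== 0; })
        .join('\r\n');
}

pc.setRemoteDescription({ type: remoteDesc.type, sdp: stripSsrc(remoteDesc.sdp) })
    .then(function () { console.log('remote description set without ssrc lines'); })
    .catch(function (err) { console.error('setRemoteDescription failed:', err); });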

Capture video with alpha channel using canvas.captureStream()

I'm trying to capture the contents of a canvas element, including its alpha channel. The RGB values come through correctly, but the alpha channel seems to get dropped when playing back the resulting video. Is there a way to achieve this?
I'm running the following code:
var blobs = [];

var stream = canvas.captureStream(60);
var recorder = new MediaRecorder(stream);
recorder.addEventListener('dataavailable', finishCapturing);
recorder.addEventListener('stop', function(e) {
    video.oncanplay = video.play;
    video.src = URL.createObjectURL(new Blob(blobs, {type: "video/webm; codecs=vp9"}));
});
startCapturing(); // starts drawing the animation onto the canvas
recorder.start();

function finishCapturing(e) {
    blobs.push(e.data); // collect the recorded chunks
}
Here's a plnkr demonstrating the issue:
http://plnkr.co/edit/z3UL9ikmn6PtVoAHvY0B?p=preview
There is currently no option to enable the VP8/VP9 transparency (alpha) channel from the MediaRecorder API.
One could maybe open an issue on the W3C mediacapture-record git repo.
I can guess a few reasons for this:
From what I understand, the WebM alpha channel is roughly a hack from Chrome; it is not really implemented in the codec itself, nor completely stabilized.
MediaRecorder should be able to encode to many formats, even if current implementations only support WebM/VP8 and WebM/VP9 (Chrome only). So they would have to somehow keep the alpha channel in the raw stream, only for this new canvas.captureStream method. Historically, MediaStreams mainly came from the getUserMedia interface, and there was no way (or need) to get transparency from there.
[Edit: the specs have changed since this answer was written, and MediaStreams should now keep the alpha channel, even if the consumer may not be able to use it; Chrome also now supports more video codecs.]
Chrome, which is the only browser to support YUVA WebM playback in its stable channel (FF supports it in Nightly 54), is still not able to include the duration in its recorded files - let them fix that before adding the hackish alpha_mode=true.
However, you can achieve it yourself fairly easily:
If you really want a transparent WebM file (only readable in Chrome and FF Nightly), you can use a second canvas to do the recording: set its background (using fillRect) to a chroma color that won't appear elsewhere in your drawings, draw the original canvas onto it, and record that canvas's stream. Once recorded, use ffmpeg to re-encode the recorded video, this time with the alpha channel:
// all #00FF00 pixels will become transparent.
ffmpeg -i in.webm -c:v libvpx -vf "chromakey=0x00ff00:0.1:0.1,format=yuva420p" -auto-alt-ref 0 out.webm
I personally needed the -auto-alt-ref 0 flag; I'm not sure everyone needs it, though.
But because of this other Chrome bug, you'll actually have to append this second canvas to the document too, and hide it with CSS (opacity: 0; width: 0px; height: 0px; should do).
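For reference, a rough sketch of that chroma-key setup (sourceCanvas is an assumed name for the original, transparent canvas):
// Draw the transparent source canvas over a solid green background on a
// second canvas, and record that second canvas instead.
var recCanvas = document.createElement('canvas');
recCanvas.width = sourceCanvas.width;
recCanvas.height = sourceCanvas.height;
document.body.appendChild(recCanvas);                    // must be in the document (see the Chrome bug above)
recCanvas.style.cssText = 'opacity:0; width:0px; height:0px;';

var recCtx = recCanvas.getContext('2d');
(function draw() {
    recCtx.fillStyle = '#00FF00';                        // chroma color that never appears in the drawing
    recCtx.fillRect(0, 0, recCanvas.width, recCanvas.height);
    recCtx.drawImage(sourceCanvas, 0, 0);
    requestAnimationFrame(draw);
})();

var recorder = new MediaRecorder(recCanvas.captureStream(60));
// ...record as usual, then run the ffmpeg chromakey command above on the output.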
TL;DR
This API's implementations are far from stabilized, no one has requested such a feature yet, it may come in the near future though, and it can be done server-side (e.g. with ffmpeg) for the time being.

Where in metadata of a video in html5 is the fps saved?

In order to fully implement my custom HTML5 video player, I need the exact frame rate of the video. However, I have not been able to find it yet and am using a standard value of 25.
Typically videos have a frame rate value in their metadata, so I tried accessing the metadata with something like this:
var vid = document.getElementById("myVideo");
vid.onloadedmetadata = function(e) {
    console.log(e);
};
However, I can't find the frame rate here. Maybe I am not reading the metadata at all.
I could use your help.
Thanks!
Try https://mediainfo.js.org (GitHub).
It works entirely in the browser (UI only), no backend needed.
I just implemented it and it looks like it works perfectly fine (at least in Chrome v70.0.3538.77) for getting extensive media information.
It looks like modern browsers are beginning to work well with binary (WebAssembly) libraries.
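For reference, a sketch of how a user-selected file might be analyzed with mediainfo.js, based on its documented analyzeData(getSize, readChunk) pattern (setup details and field names may differ between versions):
// Analyze a user-selected file with mediainfo.js and log the track info
// (the Video track should include a FrameRate field).
MediaInfo({ format: 'object' }, function (mediainfo) {
    var file = document.querySelector('input[type=file]').files[0];
    var getSize = function () { return file.size; };
    var readChunk = function (chunkSize, offset) {
        return new Promise(function (resolve, reject) {
            var reader = new FileReader();
            reader.onload = function (event) { resolve(new Uint8Array(event.target.result)); };
            reader.onerror = reject;
            reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
        });
    };
    mediainfo.analyzeData(getSize, readChunk).then(function (result) {
        console.log(result.media.track);
    });
});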
I'm 95% sure the standard HTML5 video API does not expose the fps information; from what I've read in the past months, other APIs like MPEG-DASH and players like jwplayer do expose more / different data.
Your best bet would be to look around w3schools.com/tags/ref_av_dom.asp and the similar MDN pages.
You can calculate this in realtime yourself, and it should work most of the time, though I can imagine a case or two where it wouldn't. Look at PresentedFrames and then do something like:
fps = presentedFrames / elapsedPlaybackTime
You can read more about PresentedFrames here (currently a proposal) and similar attributes at the same link.
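Until something like that ships, a rough runtime estimate can be made with the related getVideoPlaybackQuality() API where it is available - a sketch that counts presented frames over a one-second window (this measures the playback frame rate, not necessarily the encoded one):
// Estimate playback fps by sampling the presented-frame counter over one second.
function estimateFps(video, callback) {
    var startFrames = video.getVideoPlaybackQuality().totalVideoFrames;
    var t0 = performance.now();
    setTimeout(function () {
        var frames = video.getVideoPlaybackQuality().totalVideoFrames - startFrames;
        var seconds = (performance.now() - t0) / 1000;
        callback(frames / seconds);
    }, 1000);
}

estimateFps(document.getElementById("myVideo"), function (fps) {
    console.log("approximate fps:", fps);
});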
mediainfo.js works pretty well - even when used locally in a browser over 'http(s)://'.
To use it locally, just make sure you also download the accompanying mediainfo.wasm and put it in the same directory as mediainfo.min.js.
Alternatively, you can install media-info using npm.
The only caveat is that it doesn't run from the 'file://' protocol.

Embedding image in QWebView with JavaScript

I am writing a small application with Qt 4.6 (64-bit Arch Linux, though that shouldn't matter) which lets the user edit a document using a QWebView with contentEditable turned on. However, for some reason embedding an image does not work. Here is a code snippet:
void LeafEditView::onInsertImage()
{
    // bring up a dialog, ask for an image
    QString imagePath = QFileDialog::getOpenFileName(this, tr("Open Image File"), "/", tr("Images (*.png *.xpm *.jpg)"));
    ui->leafEditor->page()->mainFrame()->documentElement().evaluateJavaScript("document.execCommand('insertImage',null,'" + imagePath + "');");
}
The test image does in fact exist and yet absolutely nothing happens. Bold / italics / underline all work fine via JavaScript, just not images. Thoughts?
Check that QWebSettings::AutoLoadImages is enabled.
You could also try:
document.execCommand('insertImage',false,'"+imagePath+"');
Try using relative vs absolute paths to the image.
Last but not least, poke around this sample application - they use a similar JavaScript execCommand() approach, though they do some things in a slightly different way, such as using QUrl::fromLocalFile.
Best of luck!
It turns out that WebKit has a policy of not loading resources from the local filesystem without some massaging. In my code, I have a WebKit view which I'm using to edit leaves in a notebook. The following one-liner solved my issue:
ui->leafEditor->page()->mainFrame()->setHtml("<html><head></head><body></body></html>",QUrl("file:///"));
From what I gleaned by lurking around the WebKit mailing list archives, in order to load files from the local filesystem one must set the base URL to a file: URI, and that does the job.
