In order to fully implement my custom HTML5 video player, I need the exact frame rate of a video. However, I have not been able to find it yet and am falling back to a standard value of 25.
Typically videos carry a frame rate value in their metadata, so I accessed the metadata using something like this:
var vid = document.getElementById("myVideo");
vid.onloadedmetadata = function(e) {
    console.log(e);
};
However, I can't find the frame rate there. Maybe I am not reading the metadata at all.
I could use your help.
Thanks!
Try https://mediainfo.js.org (GitHub).
It runs entirely client side, no backend needed.
I just implemented it and it worked perfectly fine (at least in Chrome v70.0.3538.77) for getting a wide range of media information.
It looks like modern browsers are beginning to support binary (WebAssembly) libraries.
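For reference, here is a minimal sketch of how reading the frame rate with mediainfo.js might look, based on the library's documented callback API; the file input element is an assumption, and field names such as FrameRate may vary by version:

var fileInput = document.getElementById("fileInput"); // assumed <input type="file">

MediaInfo({ format: "object" }, function (mediainfo) {
    var file = fileInput.files[0];
    var getSize = function () { return file.size; };
    var readChunk = function (chunkSize, offset) {
        return new Promise(function (resolve, reject) {
            var reader = new FileReader();
            reader.onload = function (event) { resolve(new Uint8Array(event.target.result)); };
            reader.onerror = reject;
            reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
        });
    };
    mediainfo.analyzeData(getSize, readChunk).then(function (result) {
        // the video track carries the frame rate, if the container exposes one
        var videoTrack = result.media.track.find(function (t) { return t["@type"] === "Video"; });
        console.log("Frame rate:", videoTrack && videoTrack.FrameRate);
    });
});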
I'm fairly sure the standard HTML5 video API does not expose frame rate information; from what I've read over the past few months, other APIs and players such as MPEG-DASH and JW Player do expose more/different data.
Your best bet would be to look around w3schools.com/tags/ref_av_dom.asp and the similar MDN pages.
You can calculate this in real time yourself, and it should work most of the time, though I can imagine a case or two where it wouldn't. Look at presentedFrames and then do something like:
fps = presentedFrames / video.currentTime
(frames divided by elapsed playback time, not the other way around). You can read more about presentedFrames here (currently a proposal) and similar attributes at the same link.
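A rough sketch of that idea, assuming a browser that implements the requestVideoFrameCallback proposal (e.g. recent Chrome), where the callback's metadata exposes presentedFrames and mediaTime:

var video = document.getElementById("myVideo");
var last = null;

function onFrame(now, metadata) {
    if (last && metadata.mediaTime > last.mediaTime) {
        var frames = metadata.presentedFrames - last.presentedFrames;
        var seconds = metadata.mediaTime - last.mediaTime;
        console.log("estimated fps:", (frames / seconds).toFixed(2));
    }
    last = metadata;
    video.requestVideoFrameCallback(onFrame);
}

video.requestVideoFrameCallback(onFrame);

Averaging over a longer window gives a steadier estimate, since individual frames can be dropped.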
mediainfo.js works pretty well, even when used locally in a browser over 'http(s)://'.
To use it locally, just make sure you also download the accompanying mediainfo.wasm and put it in the same directory as mediainfo.min.js.
Alternatively, you can install it via npm.
The only caveat is that it doesn't run from the 'file://' protocol.
How can I capture the data stream of a JS / Leaflet animation and download it as MP4?
I am looking for output that looks something like the smooth path traced in these demos:
https://github.com/IvanSanchez/Leaflet.Polyline.SnakeAnim
Their author appears to have made them with ffcast or some screencasting software.
However, I am looking for an automated solution that can be run as script, ideally one that works on the data stream itself (not the screen), perhaps with a headless browser.
I have tried puppeteer-gif and puppeteer-gif-cast, but even at the best frame rate the result is jumpy.
I have tried WebRTC-Experiment but it requires me to set manual permissions. Ditto the Screen Capture API mentioned here, though this at least seems to work on the data stream itself.
The canvas captureStream method combined with the MediaRecorder API should do the trick.
Mind you that Chrome only supports webm as a container format (but does record h264), so you might need a post-processing step with ffmpeg.
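A minimal sketch of that combination, assuming the animation draws into a canvas with a known id (the element id, frame rate, and duration below are placeholders):

var canvas = document.getElementById("map-canvas"); // assumed animation canvas
var stream = canvas.captureStream(30);              // capture at 30 fps
// Chrome-specific: h264 video inside a webm container
var recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=h264" });
var chunks = [];

recorder.ondataavailable = function (e) { chunks.push(e.data); };
recorder.onstop = function () {
    var blob = new Blob(chunks, { type: "video/webm" });
    var a = document.createElement("a");
    a.href = URL.createObjectURL(blob);
    a.download = "animation.webm";
    a.click();
};

recorder.start();
// run the animation, then stop (10 s here, for example):
setTimeout(function () { recorder.stop(); }, 10000);

Since the h264 stream is already in the webm, remuxing to MP4 should then be a straight copy, something like: ffmpeg -i animation.webm -c copy animation.mp4.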
Is single stream audio (or video) via Chrome's WebRTC possible when you strip a=ssrc lines from the SDP?
I have tried filtering out a=ssrc lines (with the code below), but single-stream audio did not work. I also tried single-stream video, and renaming instead of removing the lines, with the same result. I modify both the offer and answer SDPs. Interestingly, this filtering works when you send SDPs with both audio & video: audio (only) will work in such a scenario. However, I had issues with re-negotiation in that scenario in our app, so it is probably not a valid solution.
You can see a minimal example with single-stream audio / video in this repo: https://github.com/Tev-work/webrtc-audio-demo.
If it is possible, can you please provide a minimal example of working audio code? Preferably using the repo above: what should the modifySdp function (in public/client.js) do?
Currently it modifies the SDP with this code:
sdp = sdp.replace(/a=ssrc/g, 'a=xssrc');
sdp = sdp.replace(/a=msid-semantic/g, 'a=xmsid-semantic');
sdp = sdp.replace(/a=mid/g, 'a=xmid');
sdp = sdp.replace(/a=group:BUNDLE/g, 'a=xgroup:BUNDLE');
If it is not possible, do you know whether such a limitation has been officially stated somewhere (please link it), or did it just become unworkable at some point? It seems like it was working before (around M29, see the comments at https://bugs.chromium.org/p/webrtc/issues/detail?id=1941 - there is no mention that this was not supposed to work).
Motivation: we are sometimes sending SDPs via SIP PBXs, which sometimes filter out SSRC lines. Supporting multiple streams in such situations is obviously out of the question (maybe with some server-side stream hacking?), but supporting at least audio-only in such scenarios would be useful for us.
That should still be possible, even though there are some side effects, such as (legacy) getStats not recognizing the stream; see this bug: https://bugs.chromium.org/p/webrtc/issues/detail?id=3342.
What you are attempting is to remove the a=ssrc lines before calling setLocalDescription. That is probably not going to work. If you want to simulate the scenario, try removing them before calling setRemoteDescription with the SDP instead.
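A sketch of that suggestion, assuming a standard RTCPeerConnection flow (the helper name is made up):

// strip a=ssrc lines from the *incoming* SDP, then apply it
function setFilteredRemoteDescription(pc, description) {
    var filteredSdp = description.sdp
        .split("\r\n")
        .filter(function (line) { return line.indexOf("a=ssrc") !== 0; })
        .join("\r\n");
    return pc.setRemoteDescription({ type: description.type, sdp: filteredSdp });
}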
I am using this within an audio experiment of mine.
audiometa: function(){
    channels = audio.mozChannels;
    rate = audio.mozSampleRate;
    frameBufferLength = audio.mozFrameBufferLength;
    fft = new FFT(frameBufferLength / channels, rate);
},
For some reason, mozChannels, mozSampleRate and mozFrameBufferLength are undefined in the latest version of Firefox. Reading the docs, I can't explain why this would happen.
Is there something within the about:config page which I need to turn on? (I have tried it locally and on a webserver.)
By the way, I am using this example.
https://wiki.mozilla.org/Audio_Data_API#Reading_Audio
Thanks
It looks like Firefox no longer supports mozChannels, mozSampleRate and mozFrameBufferLength, which would cause the undefined values. The doc you linked has a note saying that API has been deprecated in favor of the W3C Web Audio API. I also searched the Firefox codebase here:
https://dxr.mozilla.org/mozilla-central/source/
And those properties do not appear in the Firefox code. I suggest you try using the W3C Web Audio API:
https://webaudio.github.io/web-audio-api/
https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API
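As a rough sketch of the migration, the Web Audio API exposes the same kind of information through an AudioContext and an AnalyserNode (the fftSize below is just an illustrative value):

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
console.log("sample rate:", audioCtx.sampleRate); // replaces mozSampleRate

// an AnalyserNode replaces the hand-rolled FFT over mozFrameBufferLength
var analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
var freqData = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(freqData); // frequency-domain snapshot

// channel counts live on decoded buffers (replaces mozChannels):
// audioCtx.decodeAudioData(arrayBuffer).then(function (buf) {
//     console.log("channels:", buf.numberOfChannels);
// });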
Has anybody out there got working sample code that synthesizes (and plays) audio using HTML5/JavaScript on Mobile Safari on the iPad? I have found some examples of JavaScript-based sound synthesis on the web, but they all seem to work in Firefox only.
Recently I came across this JS library; I think it is what you want:
https://github.com/oampo/Audiolet
Here is an example that works for me on an iPad:
www.cse.usf.edu/~turnerr/sound_demo.html
You can download the files from http://www.cse.usf.edu/~turnerr/Downloads/Sound_Demo.zip
This demo is on a Unix-based server, but I have not been able to get the same code to work on an IIS server. Hoping someone can provide some help with IIS.
You may be able to use generated data URIs of uniform length, such as 0.1 seconds. This would give you a 1/10 of a second delay, and you would generate that many "frames" of audio. I'm not entirely sure which formats the iPad supports, but I have read that it supports uncompressed WAV. Info on this file format is pretty easy to find; I remember generating WAV files a long time ago with some primitive byte-manipulation methods.
Please post back with details!
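For illustration, here is a sketch of generating such a WAV "frame" as a data URI (mono, 8-bit PCM; the pitch, duration, and sample rate are arbitrary):

function makeWavDataUri(freq, duration, sampleRate) {
    var numSamples = Math.floor(duration * sampleRate);
    var buffer = new ArrayBuffer(44 + numSamples);
    var view = new DataView(buffer);
    function writeStr(offset, s) {
        for (var i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
    }
    // RIFF/WAVE header for 8-bit mono PCM
    writeStr(0, "RIFF");
    view.setUint32(4, 36 + numSamples, true);
    writeStr(8, "WAVEfmt ");
    view.setUint32(16, 16, true);         // fmt chunk size
    view.setUint16(20, 1, true);          // PCM
    view.setUint16(22, 1, true);          // mono
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate, true); // byte rate (1 byte per sample)
    view.setUint16(32, 1, true);          // block align
    view.setUint16(34, 8, true);          // bits per sample
    writeStr(36, "data");
    view.setUint32(40, numSamples, true);
    // fill the data chunk with a sine wave, offset to unsigned 8-bit range
    for (var i = 0; i < numSamples; i++) {
        var sample = Math.sin(2 * Math.PI * freq * (i / sampleRate));
        view.setUint8(44 + i, Math.round(sample * 127 + 128));
    }
    var bytes = new Uint8Array(buffer);
    var binary = "";
    for (var j = 0; j < bytes.length; j++) binary += String.fromCharCode(bytes[j]);
    return "data:audio/wav;base64," + btoa(binary);
}

new Audio(makeWavDataUri(440, 0.1, 8000)).play();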
I'm answering a very old question, but modern WebKit browsers now support the Web Audio API. I wrote a very simple fiddle that generates chords using sine waves. There are only 4 built-in waveforms, but you can build your own using Fourier coefficients (an array of numbers). You have to create a new oscillator object for each note; they are single-use objects. By connecting multiple oscillators to the same destination, you get polyphonic sound.
let audio = new (window.AudioContext || window.webkitAudioContext)();
let s1 = audio.createOscillator();
let g1 = audio.createGain();
s1.type = 'sine';
s1.frequency.value = 600;
g1.gain.value = 0.5;
s1.connect(g1);                 // oscillator -> gain
g1.connect(audio.destination);  // gain -> speakers
s1.start();
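Continuing that snippet, a second oscillator routed through the same gain node yields a two-note chord (the 750 Hz pitch is arbitrary):

let s2 = audio.createOscillator();
s2.type = 'sine';
s2.frequency.value = 750; // second note of the chord
s2.connect(g1);           // share the gain node and destination
s2.start();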
I am writing a small application with Qt 4.6 (64-bit Arch Linux, though that shouldn't matter) which lets the user edit a document using a QWebView with contentEditable turned on. However, for some reason embedding an image does not work. Here is a code snippet:
void LeafEditView::onInsertImage()
{
    // bring up a dialog, ask for an image
    QString imagePath = QFileDialog::getOpenFileName(this, tr("Open Image File"), "/", tr("Images (*.png *.xpm *.jpg)"));
    ui->leafEditor->page()->mainFrame()->documentElement().evaluateJavaScript("document.execCommand('insertImage',null,'" + imagePath + "');");
}
The test image does in fact exist and yet absolutely nothing happens. Bold / italics / underline all work fine via JavaScript, just not images. Thoughts?
Check that QWebSettings::AutoLoadImages is enabled.
You could also try:
document.execCommand('insertImage',false,'"+imagePath+"');
Try using relative vs absolute paths to the image.
Last but not least, poke around this sample application: they use a similar JavaScript execCommand() approach, but do some things slightly differently, such as using QUrl::fromLocalFile.
Best of luck!
It turns out that WebKit has a policy of not loading resources from the local filesystem without some massaging. In my code, I have a WebKit view which I'm using to edit leaves in a notebook. The following one-liner solved my issue:
ui->leafEditor->page()->mainFrame()->setHtml("<html><head></head><body></body></html>",QUrl("file:///"));
From what I gleaned by lurking around the WebKit mailing list archives, in order to load files from the local filesystem one must set the base URL to file:, and this does the job.