Video.js live stream setup: help request - javascript

I'm working with MPEG-DASH to stream live between MP4Box and dash.js. The problem arises on the client side: when the user pauses for too long, the playhead tries to read a point the buffer has already passed.
I can't find how to get a live skin without a play/pause button or seek bar. I'm currently trying Video.js to customise the interface, but the standard configuration and CSS tweaks don't give me what I need. I'd like to know if there is an easy open-source way to display a live stream.
Thanks a lot,
Massimo
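A minimal sketch of hiding those controls through Video.js options, assuming the controlBar children can be disabled this way and the liveui flag exists in the Video.js version in use (the player id is a placeholder):

// Hide play/pause and the seek bar via controlBar children.
// 'liveui' (Video.js 7+) and the child names are assumptions to verify.
var player = videojs('live-player', {
  liveui: true,
  controlBar: {
    playToggle: false,       // no play/pause button
    progressControl: false   // no seek bar
  }
});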

Related

Merge two videos on the client or backend side using javascript

I am working with React and Node. My project has a requirement to merge videos and play them in the player. Is it possible to do this either on my React side using a canvas, or on the back end using some module other than ffmpeg?
I just want to show a preview, nothing else. Is that possible?
Any help would be much appreciated.
Right now I am playing the videos one by one in the player:
{videos.map((url) => <ReactPlayer key={url} url={url} />)}
Now what I want to do is draw the frames onto a canvas. For that I am playing all the URLs at once, but how can I play them in series, holding back each canvas image until the previous video completes?
To achieve continuous playback of multiple input video files in browsers, there's no need for server-side processing. Static file serving of the video files (we'll call them chunks from now on) is enough.
The .appendBytes() method for a player's playback buffer was first introduced in Flash to allow switching video streams (or different-quality chunks). It was particularly useful for live video, where the next chunk doesn't yet exist when playback starts. It also allowed chunks of different resolutions to play one after the other seamlessly, which, at the time, didn't work in any other player, including VLC (and I think ffmpeg didn't have it either).
HTML5 browsers have since added an .appendBuffer() method on SourceBuffer (part of Media Source Extensions) to add video chunks to the currently playing buffer.
It lets you pre-load whatever content you need, hands-on, with full control of what gets played and when (you are in control of buffering and of what comes next at all times).
You can start dumping video chunks into the playing buffer at any time.
On the server side, ffmpeg cannot solve your problem the way the browser can: you would have to handle very difficult corner cases where the video resolution, codecs, audio channels, etc. differ between files. Also, a browser-only solution is vastly superior to remuxing or re-encoding video on the server.
MDN: https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer/appendBuffer
This will solve all your problems related to video playback of multiple source files.
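A minimal sketch of that approach, assuming fragmented MP4 chunks (MSE does not accept regular, unfragmented MP4) and a codec string that matches them; the chunk URLs and element id are placeholders:

const video = document.getElementById('player');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // A single SourceBuffer means every chunk must share codecs/resolution.
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  const chunks = ['/chunks/part1.mp4', '/chunks/part2.mp4']; // placeholders

  for (const url of chunks) {
    const data = await (await fetch(url)).arrayBuffer();
    sb.appendBuffer(data);
    // appendBuffer is asynchronous: wait for 'updateend' before the next append.
    await new Promise((r) => sb.addEventListener('updateend', r, { once: true }));
  }
  mediaSource.endOfStream();
});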
If you want to do this on the backend, as stated in the comment, you will likely need to include ffmpeg. There are some libraries, though, that make it simpler, like fluent-ffmpeg.
Assuming you already have the files on your Node server, you can use something like:
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('/path/to/input1.avi')
  .input('/path/to/input2.avi')
  .on('error', function(err) {
    console.log('An error occurred: ' + err.message);
  })
  .on('end', function() {
    console.log('Merging finished !');
  })
  .mergeToFile('/path/to/merged.avi', '/path/to/tempDir');
Note that this is a simple example taken directly from https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#mergetofilefilename-tmpdir-concatenate-multiple-inputs.
Make sure you read through the prerequisites since it requires ffmpeg to be installed and configured.
There are other aspects that you might need to consider, such as differing codecs and resolutions. Hopefully this gives you a good starting point though.
Pretty sure most, if not all, video manipulation libraries use ffmpeg under the hood.
Seeing as you're a React dev, you might appreciate Remotion for manipulating the videos as needed. It doesn't run in the frontend, but it does have some useful features.

Trim Silence in Front End with Web Audio API or anything else

I am trying to trim leading and trailing silence from an audio file recorded in the browser before I send it off to be stored by the server.
I have been looking for examples to better understand the Web Audio API, but the examples are scattered and cover deprecated methods like ScriptProcessorNode. I thought I was close when I found this example:
HTML Audio recording until silence?
which I was eager to see at least detect silence, which I think I could then use to trim. However, after loading the example in a sandbox, it does not appear to detect silence in a way that I can understand.
If anyone has any help or advice it would be greatly appreciated!
While ScriptProcessorNode is deprecated, it's not going away any time soon. You should use AudioWorkletNode if you can (but not all browsers support it).
But since you have the recorded audio in a file, I would decode it using decodeAudioData to get an AudioBuffer. Then use getChannelData(n) to get a Float32Array for the n'th channel. Analyze this array however you want to determine where the silence at the beginning ends and where the silence at the end begins. Do this for each n.
Now you know where the non-silent part is. WebAudio has no way of encoding this audio, so you'll either have to do your own encoding, or perhaps get MediaRecorder to encode it so you can send it off to your server.
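A minimal sketch of that decode-and-scan approach (the threshold value and the recordedBlob variable are assumptions, and encoding the trimmed result is still up to you):

async function trimSilence(recordedBlob, threshold = 0.01) {
  const ctx = new AudioContext();
  const buffer = await ctx.decodeAudioData(await recordedBlob.arrayBuffer());

  // Find the first and last samples above the threshold across all channels.
  let start = buffer.length, end = 0;
  for (let n = 0; n < buffer.numberOfChannels; n++) {
    const data = buffer.getChannelData(n);
    let i = 0;
    while (i < data.length && Math.abs(data[i]) < threshold) i++;
    let j = data.length - 1;
    while (j > i && Math.abs(data[j]) < threshold) j--;
    start = Math.min(start, i);
    end = Math.max(end, j + 1);
  }
  if (end <= start) return buffer; // nothing above threshold; return unchanged

  // Copy the non-silent span into a new AudioBuffer.
  const trimmed = ctx.createBuffer(buffer.numberOfChannels, end - start, buffer.sampleRate);
  for (let n = 0; n < buffer.numberOfChannels; n++) {
    trimmed.copyToChannel(buffer.getChannelData(n).subarray(start, end), n);
  }
  return trimmed;
}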

Trying to understand video playback quality HTML5/JS

I've been researching this, but I'm not getting a straight answer to my exact question. I'm trying to understand the process behind video players switching video quality (480p, 720p, 1080p, etc.).
To get there, I am first asking whether this is more of a front-end thing or a back-end thing. And to illustrate the answer, does one:
A) Upload one video file to a server (S3/Google Cloud) at the highest quality, then use a video tag in HTML and add a property to specify which quality is desired?
B) Upload one video file to a server (S3/Google Cloud) at the highest quality, then use JS to control the playback quality?
C) Split highest quality uploaded file into multiple files with different streaming quality using server code, then use front-end JS to choose which quality is needed?
D) Realize that this is way more work than it seems and should leave it to professional video streaming services, such as JWPlayer?
Or am I not seeing another option that's possible without a streaming service, without actually building a streaming service?
If the answer is pretty much D, what service do you recommend?
Note: I know YouTube and Vimeo can handle this but I'm not trying to have that kind of overhead.
It is answer 'C', as noted in the comments, and perhaps partly answer 'D' as well.
You need a video streaming server that supports one of the popular adaptive bitrate (ABR) streaming protocols, DASH or HLS. ABR lets the client device or player download the video in chunks, e.g. 10-second chunks, and select the next chunk from the bitrate most appropriate to the current network conditions.
There are open-source streaming servers available, such as GStreamer, and licensed ones like Wowza, which you can use if you want to host the videos yourself.
For some example of ABR see this answer: https://stackoverflow.com/a/42365034/334402
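On the client side, a minimal sketch of playing an HLS (ABR) stream with the open-source hls.js library; the manifest URL and element id are placeholders, and the master .m3u8 is what lists the available renditions:

import Hls from 'hls.js';

const video = document.getElementById('player');
const src = 'https://example.com/stream/master.m3u8'; // placeholder

if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(src);   // rendition switching is handled automatically
  hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  video.src = src;       // Safari plays HLS natively
}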

How to take video snapshot with VLC Web Plugin

I currently need to extract snapshot(s) from an IP camera over RTSP on a web page.
The VLC Web Plugin works well for playing the stream, but before I get my hands dirty playing with its JavaScript API, can someone tell me whether the API can take snapshot(s) of the stream, the way it's done in VLC Media Player? It isn't mentioned on the page above.
If the answer is 'No', please give me some other way to do this.
Thanks in advance.
Dang Loi.
The VLC plugin only exposes metadata properties to JavaScript.
For this reason there is no way to access the bitmap/video itself, as plugins run sandboxed in the browser. The only way to obtain such data would be if the plugin itself provided a mechanism for it.
The only way to grab a frame is therefore to use a generic screen grabber (such as SnagIt), of course without the ability to control it from JavaScript.
You could, as an option, look into the HTML5 video element to see if it can play your video source. In that case you could grab frames, draw them to a canvas, and from there save them as an image.
Another option, in case the original stream format isn't supported, is to transcode it on the fly to a format the browser supports. Here is one such transcoder.
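A minimal sketch of the canvas approach, assuming the stream is playable in an HTML5 video element (e.g. after transcoding):

function snapshot(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/png'); // a data: URL you can display or upload
}

Note that this only works if the video is same-origin or served with CORS headers; otherwise the canvas is tainted and toDataURL will throw.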

jQuery Audio Player

I was given two MP3 files, one 4.5 MB and one 5.6 MB, and was instructed to have them play on a website I am managing. I have found a nice, clean-looking CSS-based jQuery audio player.
My question is: is this the right solution for files that big? I am not sure whether the player preloads the file or streams it (if that is the correct terminology). I don't deal much with audio players and such...
This player is from happyworm.com/jquery/jplayer/latest/demo-01.htm
Is there another approach I should take to get this to play properly? I don't want it to have to buffer and make the visitor wait, or slow down page loading, etc. I want it to play cleanly and not affect the visitor's session on the site.
The name is a bit misleading: the MP3 playing is done in a Flash component, as in all other similar players. The jQuery part is the control and customization of the player (which is very nice; I'm not saying anything against the product).
The player should be capable of playing an MP3 file while it loads. It's not going to be real streaming (because you can't skip to arbitrary positions), but it should work out all right.
Make sure you test the buffering yourself with a big MP3 file. Remember to encode the MP3 files according to the rules, because otherwise the files will act up, especially in older players.
