I've been researching this for a while, but I'm not getting a straight answer to my exact question. I'm trying to understand the process behind video players switching video quality (480p, 720p, 1080p, etc.).
To get at that, I'm first asking: is this more of a front-end thing or a back-end thing? And to illustrate the answer, does one:
A) Upload one video file to a server (S3/Google Cloud) at the highest quality, then use a video tag in HTML and add a property to specify which quality is desired?
B) Upload one video file to a server (S3/Google Cloud) at the highest quality, then use JS to control the playback quality?
C) Split the highest-quality uploaded file into multiple files of different streaming qualities using server code, then use front-end JS to choose which quality is needed?
D) Realize that this is way more work than it seems and that it should be left to professional video streaming services, such as JWPlayer?
Or am I not seeing another option that's possible without a streaming service, without actually building a streaming service?
If the answer is pretty much D, what service do you recommend?
Note: I know YouTube and Vimeo can handle this but I'm not trying to have that kind of overhead.
It is answer 'C' as noted in the comments, and maybe partly answer 'D' also.
You need a video streaming server that supports one of the popular adaptive bitrate (ABR) streaming protocols, DASH or HLS. ABR allows the client device or player to download the video in chunks, e.g. 10-second chunks, and select each next chunk from the bitrate most appropriate to the current network conditions.
There are open-source streaming servers available, such as GStreamer, and licensed ones like Wowza, which you can use if you want to host the videos yourself.
For an example of ABR see this answer: https://stackoverflow.com/a/42365034/334402
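To make the front-end half of answer 'C' concrete, here is a minimal sketch using the open-source hls.js library (not mentioned above, but a common choice). It assumes the server side has already produced an HLS master playlist with several renditions; the playlist URL is hypothetical.

import Hls from 'hls.js';

const video = document.querySelector('video');
if (Hls.isSupported()) {
  const hls = new Hls(); // picks the rendition adaptively by default
  hls.loadSource('https://example.com/video/master.m3u8'); // hypothetical URL
  hls.attachMedia(video);
  hls.on(Hls.Events.LEVEL_SWITCHED, (event, data) => {
    console.log('Now playing quality level', data.level);
  });
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari plays HLS natively, no library needed.
  video.src = 'https://example.com/video/master.m3u8';
}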
Related
I am working with React and Node. My project has a requirement to merge videos and play them in the player. Is it possible to do this either on the React side using a canvas, or on the back-end side using some module other than ffmpeg?
I just want to show a preview, nothing else. Is that possible?
Any help would be much appreciated.
Right now what I am doing is playing the videos one by one in the player:
{videos.map((url) => <ReactPlayer key={url} src={url} />)}
Now what I want to do is draw the frames on the canvas. For that I am playing all the URLs at once, but how can I play them in series, holding back each video's canvas frames until the previous one completes?
To achieve continuous playback in browsers for multiple input video files, there's no need for server-side processing. Static file serving of the video files (we'll call them chunks from now on) is enough.
The .appendBytes() method for a video player's playback buffer was first invented in Flash to allow switching video streams (or different-quality chunks). It was particularly useful for live video, where the next video file doesn't yet exist when playback starts. It also allowed video chunks of multiple resolutions to play one after another seamlessly, which, at the time, didn't work in any other player, including VLC (and I think ffmpeg didn't have it either).
HTML5 browsers expose the same capability through Media Source Extensions: the SourceBuffer.appendBuffer() method adds video chunks to the currently playing video buffer.
It lets you pre-load whatever content you need by hand, with full control over what gets played and when (you are in control of buffering and of what comes next at all times).
You can start dumping video chunks into the playing buffer at any time.
On the server side, ffmpeg cannot solve your problem the way the browser can: you would have to handle very difficult corner cases where the video resolution, codecs, audio channels, etc. differ. A browser-only solution is also vastly superior to remuxing or re-encoding video on the server.
MDN: https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer/appendBuffer
This will solve all your problems related to video playback of multiple source files.
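As a minimal sketch of the approach, assuming the chunks are fragmented MP4 segments that all share the same codecs (the URLs and codec string below are hypothetical):

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

const chunkUrls = ['/chunks/part1.mp4', '/chunks/part2.mp4']; // hypothetical
const mimeCodec = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"'; // hypothetical

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
  // Play chunks back to back regardless of their internal timestamps.
  sourceBuffer.mode = 'sequence';
  for (const url of chunkUrls) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // Wait for the buffer to finish consuming this chunk before appending the next.
    await new Promise((resolve) =>
      sourceBuffer.addEventListener('updateend', resolve, { once: true })
    );
  }
  mediaSource.endOfStream();
});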
If you want to do this on the back-end, as stated in the comment, you will likely need to involve ffmpeg. There are some libraries, though, that make it simpler, like fluent-ffmpeg.
Assuming you already have the files on your Node server, you can use something like:
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('/path/to/input1.avi')
  .input('/path/to/input2.avi')
  .on('error', function(err) {
    console.log('An error occurred: ' + err.message);
  })
  .on('end', function() {
    console.log('Merging finished !');
  })
  .mergeToFile('/path/to/merged.avi', '/path/to/tempDir');
Note that this is a simple example taken directly from https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#mergetofilefilename-tmpdir-concatenate-multiple-inputs.
Make sure you read through the prerequisites since it requires ffmpeg to be installed and configured.
There are other aspects that you might need to consider, such as differing codecs and resolutions. Hopefully this gives you a good starting point though.
Pretty sure most, if not all, video manipulation libraries use ffmpeg under the hood.
Seeing as you're a React dev, you might appreciate Remotion for manipulating the videos as needed. It doesn't run in the front-end, but it does have some useful features.
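As a rough sketch of what that could look like in Remotion (the clip names and frame counts below are made up, and the durations would need to match your actual footage):

import { Sequence, Video, staticFile } from 'remotion';

// Plays two clips back to back inside one Remotion composition.
export const MergedVideo = () => (
  <>
    <Sequence from={0} durationInFrames={300}>
      <Video src={staticFile('input1.mp4')} />
    </Sequence>
    <Sequence from={300} durationInFrames={240}>
      <Video src={staticFile('input2.mp4')} />
    </Sequence>
  </>
);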
I'm working with MPEG-DASH to stream live between MP4Box and dash.js. The problem arises on the client side: when the user pauses for too long, the playhead tries to read a point the buffer has already passed.
I can't find out how to get a live skin without a play/pause button or seek bar. I've tried video.js to customise the interface using the standard configuration, but the CSS configuration doesn't give me what I need. I'd like to know if there is an easy open-source way to display live streaming.
Thanks a lot,
Massimo
So here is my problem. I want to play audio from Node.js running on a Raspberry Pi and then adjust the brightness of an LED strip, also connected to the same Pi, based on frequency readings from the audio file. However, I can't seem to find anything in Node that gives the same functionality as the Web Audio API AnalyserNode.
I found a few libraries (https://www.npmjs.com/package/audio-render) that come close and are based on the Web Audio API, but the frequency values they produce are completely incorrect. I verified this by comparing them against a browser version I created using the Web Audio API.
I need the audio to play from Node while also being analyzed to affect the brightness levels.
Any help would be appreciated. I really thought this would be simpler to handle in Node, but six hours later I'm still without a solution.
Victor Dibiya at IBM has an excellent example that illustrates how to use the web-audio-api module to decode an audio file into a buffer of PCM data, from which one can extract amplitude data and infer beats:
https://github.com/victordibia/beats
I have this working on a Raspberry Pi with LEDs controlled via Fadecandy.
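A minimal sketch of the decoding step with the web-audio-api npm module (the file name is hypothetical):

const fs = require('fs');
const { AudioContext } = require('web-audio-api');

const context = new AudioContext();
fs.readFile('track.mp3', (err, buffer) => { // hypothetical file
  if (err) throw err;
  context.decodeAudioData(buffer, (audioBuffer) => {
    const samples = audioBuffer.getChannelData(0); // PCM floats in [-1, 1]
    // Rough loudness measure: mean absolute amplitude across the whole file.
    let sum = 0;
    for (let i = 0; i < samples.length; i++) sum += Math.abs(samples[i]);
    console.log('Average amplitude:', sum / samples.length);
  });
});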
I currently need to extract snapshot(s) from an IP Camera using RTSP on a web page.
The VLC Web Plugin works well for playing the stream, but before I get my hands dirty playing with its JavaScript API, can someone tell me whether the API can help me take snapshot(s) of the stream, the way it's done in VLC Media Player? That feature isn't mentioned on the page above.
If the answer is 'No', please give me some other way to do this.
Thanks in advance.
Dang Loi.
The VLC plugin only provides metadata properties accessible from JavaScript.
For this reason there is no way to access the bitmap/video itself, as plugins run sandboxed in the browser. The only way to obtain such data would be if the plugin itself provided a mechanism for it.
The only way to grab a frame is therefore to use a generic screen capture tool (such as SnagIt), of course without the ability to control it from JavaScript.
You could, as an option, look into the HTML5 video element to see if it can handle your video source. In that case you could grab frames, draw them to a canvas, and from there save them as an image.
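A minimal sketch of that frame grab, assuming the stream is already playing in a video element and is served with CORS headers that keep the canvas untainted:

const video = document.querySelector('video');
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0); // copy the current frame
const snapshot = canvas.toDataURL('image/png'); // throws if the source taints the canvas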
Another option in case the original stream format isn't supported, is to transcode it on the fly to a format supported by the browser. Here is one such transcoder.
I was given 2 MP3 files, one that is 4.5 MB and one that is 5.6 MB. I was instructed to have them play on a website I am managing. I have found a nice, clean-looking CSS-based jQuery audio player.
My question is: is this the right solution for files that big? I am not sure whether the player preloads the file or streams it (if that is the correct terminology). I don't deal much with audio players and such...
This player is from happyworm.com/jquery/jplayer/latest/demo-01.htm
Is there another approach I should take to get this to play properly? I don't want it to have to buffer and make the visitor wait, or slow page loading, etc. I want it to play cleanly and not affect the visitor's session on the site.
The name is a bit misleading - the MP3 playing is done in a Flash component, as in all other similar players. The jQuery part of it is the control and customization of the player (which is very nice; I'm not saying anything against the product).
The player should be capable of playing an MP3 file while it loads. It's not going to be real streaming (because you can't skip to arbitrary positions), but it should work out all right.
Make sure you test the buffering yourself with a big MP3 file. Remember to encode the MP3 files according to the rules, because otherwise the files will act up, especially in older players.