nodejs ffmpeg play video at specific time and stream it to client - javascript

I'm trying to make a basic online video editor with nodeJS and ffmpeg.
To do this I need two steps:
1. Set the in and out times of the videos from the client, which requires the client to view the video at specific times and to switch the playback position. Meaning, if a single video is used as input and split into smaller parts, playback needs to resume from the starting time of the next edited segment, if that makes sense.
2. Send the in/out data to Node.js and export it with ffmpeg as a finished video.
At first I wanted to do step 1 purely on the client, then upload the source video(s) to Node.js, generate the same result with ffmpeg, and send back the result.
But there are many problems with video processing on the client side in HTML at the moment, so now I have a change of plans: to do all of the processing on the Node.js server, including the video playing.
This is the part I am stuck on now. I'm aware that ffmpeg can be used in many different ways from Node.js, but I have not found a way to play a .mp4 webm video in realtime with ffmpeg, at a specific timestamp, and send the streaming video (again, at a certain timestamp) to the client.
I've seen the pipe:1 option in ffmpeg, but I couldn't find any tutorials to get it working with an mp4 webm video, or how to parse the stdout data with Node.js and send it to the client. And even if I could get that part to work, I still have no idea how to play the video, in realtime, at a certain timestamp.
I've also seen ffplay, but as far as I know that's only for testing; I haven't seen any way of getting the video data from it in realtime with Node.js.
So:
How can I play a video in Node.js, at a specific time (preferably with ffmpeg), and send it back to the client in realtime?
What I have already seen:
Best approach to real time http streaming to HTML5 video client
Live streaming using FFMPEG to web audio api
Ffmpeg - How to force MJPEG output of whole frames?
ffmpeg: Render webm from stdin using NodeJS
No data written to stdin or stderr from ffmpeg
node.js live streaming ffmpeg stdout to res
Realtime video conversion using nodejs and ffmpeg
Pipe output of ffmpeg using nodejs stdout
can't re-stream using FFMPEG to MP4 HTML5 video
FFmpeg live streaming webm video to multiple http clients over Nodejs
http://www.mobiuso.com/blog/2018/04/18/video-processing-with-node-ffmpeg-and-gearman/
stream mp4 video with node fluent-ffmpeg
How to get specific start & end time in ffmpeg by Node JS?
Live streaming: node-media-server + Dash.js configured for real-time low latency
Low Latency (50ms) Video Streaming with NODE.JS and html5
Server node.js for livestreaming
HLS Streaming using node JS
Stream part of the video to the client
Video streaming with HTML 5 via node.js
Streaming a video file to an html5 video player with Node.js so that the video controls continue to work?
How to (pseudo) stream H.264 video - in a cross browser and html5 way?
Pseudo Streaming an MP4 file
How to stream video data to a video element?
How do I convert an h.264 stream to MP4 using ffmpeg and pipe the result to the client?
https://medium.com/@brianshaler/on-the-fly-video-rendering-with-node-js-and-ffmpeg-165590314f2
Can Node.js edit video files?

This question is a bit broad, but I've built similar things and will try to answer this in pieces for you:
Set the in and out times of the videos from the client, which requires the client to view the video at specific times and to switch the playback position. Meaning, if a single video is used as input and split into smaller parts, playback needs to resume from the starting time of the next edited segment, if that makes sense.
Client-side, when you play back, you can simply use multiple HTMLVideoElement instances that reference the same URL.
For the timing, you can manage this yourself using the .currentTime property. However, you'll find that your JavaScript timing isn't going to be perfect. If you know your start/end points at the time of instantiation, you can use Media Fragment URIs:
video.src = 'https://example.com/video.webm#t=5.5,30';
In this example, the video starts at 5.5 seconds, and stops at 30 seconds. You can use the ended event to know when to start playing the next clip. This isn't guaranteed to be perfectly frame-accurate, but is pretty good for something like a live preview.
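A minimal sketch of chaining clips this way; the clip list and URL are made-up placeholders:
// Hypothetical clip list: [start, end] pairs in seconds, all on the same source video.
const clips = [[5.5, 30], [42, 60], [0, 10]];
const video = document.querySelector('video');
let i = 0;
function playClip() {
  video.src = 'https://example.com/video.webm#t=' + clips[i][0] + ',' + clips[i][1];
  video.play();
}
video.addEventListener('ended', () => {
  i = (i + 1) % clips.length; // advance to the next edited segment
  playClip();
});
playClip();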
But there are many problems with video processing on the client side in HTML at the moment, so now I have a change of plans: to do all of the processing on the Node.js server,...
Not a bad plan, if consistency is important.
... including the video playing.
There is a serious tradeoff you're making here, as far as latency to controlling that video, and quality of preview. I'd suggest a hybrid approach where editing is done client-side, but your final bounce/compositing/whatever is done server-side.
This isn't unlike how desktop video editing software works anyway.
This is the part I am stuck on now. I'm aware that ffmpeg can be used in many different ways from Node.js, but I have not found a way to play a .mp4 webm video in realtime with ffmpeg, at a specific timestamp, and send the streaming video (again, at a certain timestamp) to the client.
Is it MP4, or is it WebM? Those are two distinct container formats. WebM is easily streamable, as piped directly out of FFmpeg. MP4 requires futzing with the MOOV atom (-movflags faststart, which itself needs a seekable output; for piping you'd use fragmented MP4 instead, e.g. -movflags frag_keyframe+empty_moov), and can be a bit of a hassle.
In any case, sounds like you just need to set timestamps on the input:
ffmpeg -ss 00:01:23 -i video.mp4 -to 00:04:56 -f webm -
I've seen the pipe:1 option in ffmpeg, but I couldn't find any tutorials to get it working with an mp4 webm video, or how to parse the stdout data with Node.js and send it to the client.
Just use a hyphen - as the output filename and FFmpeg will output to STDOUT. Then, there's nothing else you need to do in your Node.js application... pipe that output directly to the client. Untested, but you're looking for something like this, assuming a typical Express app:
const express = require('express');
const child_process = require('child_process');
const app = express();

app.get('/stream', (req, res) => {
  const ffmpeg = child_process.spawn('ffmpeg', [
    '-i', 'video.mp4',
    '-f', 'webm',
    '-'
  ]);

  res.set('Content-Type', 'video/webm'); // TODO: Might want to set your codecs here also
  ffmpeg.stdout.pipe(res);

  // Clean up FFmpeg if the client disconnects mid-stream
  res.on('close', () => ffmpeg.kill('SIGKILL'));
});
And even if I could get that part to work, I still have no idea how to play the video, in realtime, at a certain timestamp.
Well, for this, you're just playing a stream so you can just do:
<video src="https://your-nodejs-server.example.com/stream" preload="none" />
The preload="none" part is important, to keep it "live".
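Putting the two together, here's an untested sketch of the same route, now taking the start time from a hypothetical start query parameter:
const express = require('express');
const child_process = require('child_process');
const app = express();

// e.g. <video src="https://your-nodejs-server.example.com/stream?start=83" preload="none">
app.get('/stream', (req, res) => {
  const start = Number(req.query.start) || 0; // seconds into the source (hypothetical param)
  const ffmpeg = child_process.spawn('ffmpeg', [
    '-ss', String(start), // input-side seek: fast, snaps to the nearest keyframe
    '-i', 'video.mp4',
    '-f', 'webm',
    '-'
  ]);
  res.set('Content-Type', 'video/webm');
  ffmpeg.stdout.pipe(res);
  res.on('close', () => ffmpeg.kill('SIGKILL'));
});
app.listen(3000);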
An alternative to all of this is to set up a GStreamer pipeline, and probably utilize its built-in WebRTC stack. This is not trivial, but has the advantage of potentially lower latency, and automatic handling of "catching up" to live video from the server. If you use the normal video tag, you'll have to handle that yourself by monitoring the buffered data and managing the playback speed.
I've also seen ffplay...
FFplay isn't relevant to your project.
Hopefully this pile of notes will give you some things to consider and look at.

Related

Inserting meta data into a live video stream

I want to achieve the following, but it remains unclear if this is possible.
The current scenario:
Someone is streaming a video with audio through OBS to a media server; clients connect through a website.
[OBS Stream/Video Stream] -> [AWS/External Streaming Service] -> Clients
The wanted scenario:
Capture this stream through a custom media server and manipulate it by injecting certain metadata at certain moments during the livestream. Note the importance of live.
[OBS Stream/Video Stream] -> [My Custom Node.js Server to insert metadata] -> [AWS/External Streaming Service] -> Clients
The idea:
The idea is that I want to synchronize the stream to some popup, for example. The default stream protocol from OBS seems to be RTMP, but maybe this can be changed. At a given time during the livestream, an HTML5 video player on the website can read these tags from the livestream (through some additional library such as video.js) and tell the JS application to show some text. In the end, it boils down to synchronizing the video stream to a text stream (e.g. from a WebSocket connection).
Potential solutions:
ID3 tags. I read about ID3 tags in MP3 files, but this does not seem to be what I'm looking for, as it needs a complete .mp3 file upfront and is not used for streams (Dynamically Inject ID3 in FFMPEG Live Stream). What I want is to dynamically inject metadata into the stream. For example, injecting an ID at any (dynamically chosen) time, which references a database record, should suffice.
LTC/Linear Time Code/SMPTE: is it possible to embed that in a video stream somehow with Node.js? That would enable me to match timings with an ID on the client.
Is this possible to do given an incoming video stream with audio? And if so, what is the format of the stream, and how do I inject the metadata?
EDIT: it seems RTMP is not supported without Flash in the browser. This is a no-go, so I will need to use another stream format such as HLS/FLV?
Sounds like using something like Liquidsoap as your streaming server would do the trick for inserting the metadata into the stream. Plenty of options for manipulating metadata for you to explore.
As for client side decoding you could perhaps use a javascript readable stream within a service worker to split the server output into metadata/video and process as you see fit.
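For what it's worth, an untested sketch of that client-side idea; parseChunk() and the two handlers are hypothetical stand-ins for whatever framing your server uses:
async function consumeStream() {
  const response = await fetch('/stream');
  const reader = response.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // parseChunk() is hypothetical: split the bytes on your own delimiter/framing.
    const { metadata, media } = parseChunk(value);
    if (metadata) showPopup(metadata);  // hypothetical: drive the synced popup
    if (media) appendToPlayer(media);   // hypothetical: e.g. a MediaSource buffer
  }
}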
I did a similar thing for processing in-band metadata on an infinite MP3 stream, which might give you some ideas on where to start. You can find the code for that here

Shoutcast stream spectral plot in Javascript

I am trying to make a live spectral plot of a SHOUTcast audio stream. I have found this page http://www.aerodynes.fr/2014/04/14/a-pure-javascript-audio-waterfall/ of someone doing almost exactly what I would like, but with the audio from the sound card. How do I open a SHOUTcast stream for processing in the same way he did? I can't seem to find info on it in the Web Audio API.
// Open the microphone
function init() {
  var audioConstraints = {
    audio: true
  };
  getUserMedia(audioConstraints, gotStream);
}
...
Thanks for any advice/info.
You can't directly. You'll have to MacGyver some solution.
The first problem you'll run into is that you cannot connect directly to a SHOUTcast stream from most browsers. While SHOUTcast is essentially HTTP, there is one small difference that breaks compatibility, especially with more modern clients. A normal HTTP server returns a status line in its response like this:
HTTP/1.1 200 OK
SHOUTcast servers return this:
ICY 200 OK
The only way around this (assuming you need to still use SHOUTcast) is to proxy the data server-side while rewriting the response status line.
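Untested, but such a proxy might look something like this in Node.js (host, port, and mount point are placeholders):
const net = require('net');
const http = require('http');

http.createServer((req, res) => {
  // Placeholder host/port/mount for your SHOUTcast server
  const upstream = net.connect(8000, 'shoutcast.example.com', () => {
    upstream.write('GET /stream HTTP/1.0\r\n\r\n');
  });
  upstream.once('data', (chunk) => {
    // The first chunk begins with "ICY 200 OK"; swap it for a standard status
    // line, then pass the remaining bytes through untouched.
    const head = chunk.toString('latin1').replace(/^ICY/, 'HTTP/1.0');
    res.socket.write(head, 'latin1');
    upstream.pipe(res.socket);
  });
}).listen(3000);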
The next problem is that SHOUTcast/Icecast streams use a codec, usually MP3 or AAC in ADTS, to compress the audio into bandwidth suitable for internet streaming. The Web Audio API deals with floating point PCM samples. You will have to decode the audio stream. While this can often be done in-browser, it depends on the codec you're using. Otherwise, you'll have to do it server-side at which point you might as well do the spectrum analysis server side and stream frequency band values.
I think that the best way to handle this is to get your stream playing in an audio element or object and use that as a Web Audio API node which is then connected to your analyser node to get the spectrum. You'll want to use Icecast for your server, and you will have to transcode your streams to a couple codecs to get broader browser support.
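As a rough sketch of that last approach (assuming the stream URL serves proper CORS headers; otherwise the analyser will only see silence):
// Analyse an <audio> element's stream with an AnalyserNode.
const audio = new Audio('https://icecast.example.com/stream'); // placeholder URL
audio.crossOrigin = 'anonymous';
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();
source.connect(analyser);
analyser.connect(ctx.destination); // keep the audio audible
const bins = new Uint8Array(analyser.frequencyBinCount);
function draw() {
  analyser.getByteFrequencyData(bins); // fill bins with the current spectrum
  // ... plot bins to a canvas here ...
  requestAnimationFrame(draw);
}
audio.play().then(draw);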

Stream part of the video to the client

Given a windows server backend, is there a way to implement a pure javascript/html5 client that would be able to play only a designated part of the video file (e.g. from 10th second to 15th on a 2 hour video)?
From what I know, standard html5 video tag will download an entire file which is not suitable for my situation.
Streaming solutions on the server would probably be an answer, but are there any that would work with pure javascript/html client? Thanks.
To do this you should encode your video into one of the segmented/fragmented format like MPEG-DASH or Apple HLS. The result will be a playlist file and 1 or more media files containing 2 to 10 second fragments of your (long) video file. For DASH you will normally have 1 fragmented MP4 file containing 2 second fragments of video, the playlist file will tell your player which parts of the file to download corresponding to the time you wish to play. For this to work your web server needs to support HTTP RANGE headers (which most do).
For HLS you will normally end up with multiple 10 second files. The playlist file will tell the player which file to download for the time to play.
Here's how to build a HTML5 player to play DASH streams:
http://blogs.msdn.com/b/interoperability/archive/2014/01/03/mpeg-dash-tutorial-embedding-an-adaptive-streaming-video-within-your-html5-application.aspx
http://www-itec.uni-klu.ac.at/dash/?page_id=746
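These days the player side is only a few lines with the dash.js reference library; a minimal sketch, with a placeholder manifest URL:
// dash.js fetches the playlist and the right fragments for you.
const player = dashjs.MediaPlayer().create();
player.initialize(document.querySelector('video'), 'https://example.com/manifest.mpd', true);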
Besides complex methods like HLS or MPEG-DASH, you can consider using pseudo-streaming, or progressive download. Its seeking capability, supported by a number of media servers, will let you watch the MP4 video from any moment. Using JavaScript you should be able to set up play and stop when you need, as sketched below (but it's up to you to deal with how different browsers handle playback in an HTML5 video container).
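A minimal sketch of that play/stop logic (assumes the server honours HTTP Range requests, so seeking doesn't download the whole 2-hour file):
const video = document.querySelector('video');
video.currentTime = 10; // seek to the in-point (10th second)
video.play();
video.addEventListener('timeupdate', () => {
  if (video.currentTime >= 15) video.pause(); // stop at the out-point (15th second)
});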

Is it possible to export video with comments on overlay of video using popcorn JS?

I am using Popcorn JS for adding annotations on video; I have created an overlay on the video and all annotations are rendered on it. Is there any way to export the video with the embedded HTML content inside an .mp4 file, so I can play that video in any native player like VLC?
You're best off handling it all on the server side and simply playing the rendered video on the client. If the code on the client side is sufficiently complex, you can consider the two following options:
Easiest option: Client frame rendering, server video rendering
You can quite easily grab each frame from the video, draw it onto a canvas and next draw the annotations to the same canvas as well (using either custom code or a library like html2canvas). Next the easiest thing to do would be to send all the frames one by one to the server and use a simple ffmpeg command (something like ffmpeg -i img%03d.png -c:v libx264 -pix_fmt yuv420p out.mp4) to generate the mp4 which you would then send back to the client.
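A rough sketch of the client half of that; video, annotationLayer, frameNumber, and the /frames endpoint are all hypothetical stand-ins:
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
ctx.drawImage(video, 0, 0);           // the current video frame
ctx.drawImage(annotationLayer, 0, 0); // your rendered annotations (e.g. from html2canvas)
canvas.toBlob((blob) => {
  // POST each frame to a hypothetical endpoint for FFmpeg to pick up as img%03d.png
  fetch('/frames?n=' + frameNumber, { method: 'POST', body: blob });
}, 'image/png');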
Best, but hard option: Client frame rendering, client video rendering
Of course, actually rendering the video on the client side is not impossible. Do note however that the only library I am aware of does not render .mp4 files, but .webm files. Whether that's a problem is up to you. Either way, the library that is capable of doing this is called whammy.js. Once again you would need to draw all the frames and annotations to a canvas, which you then add (via encoder.add) to the Whammy video object. The API is pretty simple and to the point; however, do note that I have no idea how cross-platform its support is.
Short answer: no
Long answer:
The MP4 container can hold XMP metadata, so in theory someone could write an exporter, but you would still need a player capable of using the XMP metadata, and as far as I know VLC doesn't support it.

Capture Audio Input with flash or html5

I am trying to capture the microphone and send the recording to my server. I tried this method here, but it records only a big WAV and the upload can be slow sometimes.
Is there a way to capture the voice and compress it on the client side?
The best method would be to send the recording while recording, but I have no idea if this is possible. (It works for YouTube live webcam recording, so it must work for audio only too.)
Hey, check out this post where I replied to someone with a similar question as yours:
How do I embed a Flash audio recorder in my site
I don't know about client-side compression (I have looked into it before and couldn't find anything), but I know you can severely reduce the size of the file by limiting the recording rate via these numbers here, where if I recall correctly 16 is 16 kHz recording:
recorder = new MicRecorder(wavencoder, null, 50, 16);
Also, sending to the server is not that hard; just look up how to POST data, because the WAV file is essentially binary data.
You can compress the file on the client side using libmp3lame.js: https://github.com/akrennmair/libmp3lame-js
There is already a GitHub project that uses this library to record audio and save it in MP3 format directly in the browser:
https://github.com/nusofthq/Recordmp3js
