I have an audio visualizer written in JS that draws on a <canvas> element.
Is it possible (without screen capture) to turn that <canvas> into a (realtime) video stream, perhaps by writing it to a socket directly?
The JS uses THREE.js for rendering.
Preferably I'd like to run this on a web server. It's probably not possible without actually using a browser, but if it is, I'd be very happy to hear about it ;)
Using the info from Blindman67 I've managed to figure out a way of achieving the desired result.
I will end up using PhantomJS, having it write images to /dev/stdout (or another socket) and using ffmpeg to turn those into a video stream (sort of as described in this question).
I will also run a test using Whammy, but as described on its GitHub page that might not produce the desired result; only one way to find out.
Edit: I will also try kaiido's suggestion to use WebRTC.
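For reference, a minimal sketch of that pipeline in Node (the renderer script name, frame rate, and stream target are all hypothetical, and it assumes the renderer writes PNG frames to stdout):

const { spawn } = require('child_process');

// Hypothetical setup: render.js makes PhantomJS emit PNG frames on stdout.
const renderer = spawn('phantomjs', ['render.js']);
const ffmpeg = spawn('ffmpeg', [
  '-f', 'image2pipe',      // read a sequence of images from stdin
  '-framerate', '30',      // treat them as 30 fps input
  '-i', '-',
  '-c:v', 'libx264',
  '-pix_fmt', 'yuv420p',
  '-f', 'mpegts',          // a container suitable for streaming
  'udp://127.0.0.1:1234'   // hypothetical stream target
]);

renderer.stdout.pipe(ffmpeg.stdin);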
Related
I am working with React and Node. My project has a requirement to merge videos and play them in a player. Is it possible to do this either on the React side using a canvas, or on the back end using some module other than ffmpeg?
I just want to show a preview, nothing else. Is that possible?
Any help would be much appreciated.
Right now I am playing the videos one by one in the player:
{videos.map((url) => <ReactPlayer key={url} url={url} />)}
Now what I want to do is draw the frames onto a canvas. For that I am playing all the URLs at once, but how can I play them in series, holding back each video's canvas frames until the previous one completes?
To achieve continuous playback of multiple input video files in browsers, there's no need for server-side processing. Static file serving of the video files (we'll call them chunks from now on) is enough.
The .appendBytes() method for a video player's playback buffer was first invented in Flash to allow switching video streams (or chunks of different quality). It was also particularly useful for live video, where the next video file doesn't exist yet when playback starts. And it allowed video chunks of different resolutions to play one after the other seamlessly, which, at the time, didn't work in any other player, including VLC (and I think ffmpeg didn't have it either).
HTML5 browsers have since added an .appendBuffer() method (on SourceBuffer, part of Media Source Extensions) to add video chunks to the currently playing video buffer.
It lets you pre-load whatever content you need, hands-on, with full control of what gets played and when (you are in control of buffering and what comes next at all times).
You can start dumping video chunks into the playing buffer at any time.
On the server side, ffmpeg cannot solve your problem the way the browser can: you would have to handle very difficult corner cases where the video resolution, codecs, audio channels, etc. differ between files. Also, a browser-only solution is vastly superior to remuxing or re-encoding video on the server.
MDN: https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer/appendBuffer
This will solve all your problems related to video playback of multiple source files.
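For illustration, a minimal sketch of that approach, assuming the chunks are served statically as fragmented MP4 (the chunk URLs and codec string are placeholders that must match your actual encoding):

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // The codec string must match how the chunks were actually encoded.
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  for (const url of ['/chunks/part1.mp4', '/chunks/part2.mp4']) { // hypothetical chunk URLs
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // Wait until the buffer has consumed this chunk before appending the next.
    await new Promise((resolve) =>
      sourceBuffer.addEventListener('updateend', resolve, { once: true })
    );
  }
  mediaSource.endOfStream();
});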
If you want to do this on the backend, as stated in the comment, you will likely need to involve ffmpeg. There are some libraries that make it simpler, though, like fluent-ffmpeg.
Assuming you already have the files on your Node server, you can use something like:
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('/path/to/input1.avi')
  .input('/path/to/input2.avi') // add as many inputs as you need
  .on('error', function(err) {
    console.log('An error occurred: ' + err.message);
  })
  .on('end', function() {
    console.log('Merging finished !');
  })
  // Concatenates the inputs into one file, using tempDir for intermediate files.
  .mergeToFile('/path/to/merged.avi', '/path/to/tempDir');
Note that this is a simple example taken directly from https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#mergetofilefilename-tmpdir-concatenate-multiple-inputs.
Make sure you read through the prerequisites since it requires ffmpeg to be installed and configured.
There are other aspects that you might need to consider, such as differing codecs and resolutions. Hopefully this gives you a good starting point though.
Pretty sure most, if not all, video manipulation libraries use ffmpeg under the hood.
Seeing as you're a React dev, you might appreciate Remotion for manipulating the videos as needed. It doesn't run in the frontend, but it does have some useful features.
I'll try my best to explain this as well as I can:
I have programmed an art installation (an interactive animation with three.js). It runs very smoothly on my laptop, but not on older machines and on almost no mobile devices. Is there any way to run the JavaScript animation on a powerful machine/server but display/render it on a website, so that it also runs on less powerful devices? I was thinking of something like what's done for cloud gaming (as in shadow.tech) or services like parsec.app, maybe...
Any ideas/hints?
I've thought about the same concept before too. The answer to your question, rendering three.js on a server and displaying it on an older machine, will be something like this:
You render three.js on a server, live-stream it to the client, then capture the client's controls and send them back to the server to apply them. Latency is going to be a pain in the ass, though, so write your code nice and neat with a lot of error handling. Come to think of it, Google Stadia does the same thing when you play games through YouTube.
Server Architecture
So roughly thinking about it I would build my renderer backend like this:
First, create a room with multiple instances of your three.js code (i.e. index.js, index1.js, index2.js, etc.). Then decide whether you want to use the straightforward record-and-stream method or capture the stream directly from the canvas and broadcast it.
Recorded and Stream method
This means you create the room on the server, render and display it on the server machine, then capture what is displayed on the screen with something like OBS Studio and broadcast it to the client. I don't recommend this method.
OR
captureStream() method
After creating your room on your server, run the three.js code, then access the canvas you are rendering the three.js code into, like this:
var canvas = document.querySelector('canvas');
var video = document.querySelector('video');
// Optional frames per second argument.
var stream = canvas.captureStream(60);
// Set the source of the <video> element to be the stream from the <canvas>.
video.srcObject = stream;
Then use something like WebRTC to broadcast it to the client.
This is an example site you can look at: https://webrtc.github.io/samples/src/content/capture/canvas-pc/
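Here's a rough sketch of the broadcasting side, assuming you already have a signaling channel to exchange the offer/answer with the viewer:

async function broadcastCanvas(canvas) {
  const stream = canvas.captureStream(60); // capture the three.js canvas at 60 fps
  const pc = new RTCPeerConnection();
  // Add the canvas track(s) so they're included in the offer.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Send pc.localDescription over your signaling channel, then apply the
  // viewer's answer with pc.setRemoteDescription(answer).
  return pc;
}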
Honestly, don't ditch the client-interaction idea. Instead, write code to detect latency beforehand, then decide whether to enable it for that specific client.
Hope this helps.
I have looked around a bit, but I couldn't find a proper library, and everyone points in a different direction.
So my question is simple: is it possible to convert a media file (webm) on a web page to mp4 and immediately start the download?
I'm making an extension for Firefox, but I have no idea how to do the conversion part.
I wouldn't recommend handling something as complicated as conversion yourself. Instead, use something like https://cloudconvert.com/api as a background service: your plugin queries that API to upload/download, and delivers the output to the user's browser.
I currently need to extract snapshot(s) from an IP Camera using RTSP on a web page.
The VLC Web Plugin works well for playing the stream, but before I get my hands dirty with its JavaScript API, can someone tell me whether the API can help me take snapshot(s) of the stream, the way it's done in VLC Media Player? That capability isn't mentioned on the page above.
If the answer is 'No', please give me some other way to do this.
Thanks in advance.
The VLC plugin only provides metadata properties accessible from JavaScript.
For this reason there is no way to access the bitmap/video data itself, as plugins run sand-boxed in the browser. The only way to obtain such data would be if the plugin itself provided a mechanism for it.
The only way to grab a frame is therefore to use a generic screen grabber (such as SnagIt), though of course without the ability to control it from JavaScript.
You could, as an option, look into the HTML5 video element to see if you can use your video source with that. In that case you could grab frames, draw them to a canvas, and from there save them as an image.
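Something like this sketch, assuming the video is same-origin (or CORS-enabled) so the canvas isn't tainted:

const video = document.querySelector('video');
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0); // copy the current frame
const snapshot = canvas.toDataURL('image/png'); // a data URL you can display or download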
Another option, in case the original stream format isn't supported, is to transcode it on the fly to a format the browser supports. Here is one such transcoder.
I have some small images that, tiled together, make a full-size image. The tiles are saved on the server. I would like to stitch the tiles together in the right positions and create one image file on disk made up of all the tile files. How can I do this in nodejs?
Thanks
Your best bet is probably to invoke a tool like ImageMagick, which has a montage command that does exactly what you're looking for.
This would be fairly straightforward to implement yourself, but I see that this fork of node-imagemagick has montage support.
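For example, a sketch that shells out to montage from Node (the tile filenames and grid size are hypothetical):

const { execFile } = require('child_process');

// Tiles listed in row order; -tile 2x2 lays them out in a 2-wide, 2-high grid
// and -geometry +0+0 removes the default spacing between tiles.
const tiles = ['tile0.png', 'tile1.png', 'tile2.png', 'tile3.png'];
execFile('montage', [...tiles, '-tile', '2x2', '-geometry', '+0+0', 'full.png'], (err) => {
  if (err) return console.error('montage failed:', err);
  console.log('Stitched image written to full.png');
});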
Since node.js doesn't have a graphics editing suite, your best path is to call an external script or tool, using Java, PHP, or whatever language you feel most comfortable hacking with.
There's plenty of material on how to run a script from node.js, so I won't mess around with that here.
However, I would suggest that you pass a temporary filename as an argument to the script; then, when it finishes executing, go get that file rather than trying to read the binary back as a return value or something equally convoluted.
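A sketch of that pattern (stitch.php is a hypothetical external script that writes its result to the output path it's given):

const { execFile } = require('child_process');
const fs = require('fs');
const os = require('os');
const path = require('path');

const outFile = path.join(os.tmpdir(), `stitched-${Date.now()}.png`);
execFile('php', ['stitch.php', outFile], (err) => {
  if (err) throw err;
  const image = fs.readFileSync(outFile); // read the finished file back from disk
  // ...serve or store `image`, then clean up the temp file...
});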