I'm trying to render HTML as an H.264 stream and then stream it to another PC on my network.
I've got the last part down: streaming to another PC on my network.
Now my only problem is rendering the webpage.
I can't render it once, because it isn't a static webpage.
I need to load the webpage, fetch images, run JavaScript and open WebSockets.
The only way I can imagine this working is if I run a browser (or maybe something like CEF?), "capture" the output, and encode it as H.264.
I'm basically trying to do the same thing as OBS's BrowserSource, but the only reason I'm NOT using OBS is that I can't find a good way to run it headless.
NOTE: I need to be able to do it through the commandline, completely headless.
I've done this with the Chromium Tab Capture API and the Off-Screen Tab Capture API.
Chromium will conveniently handle all the rendering, including bringing in WebGL-rendered content, and composite it all together into a nice neat MediaStream for you. From there, you can use it in a WebRTC call or pass it off to a MediaRecorder instance.
The off-screen tab capture even has a nice separate isolated process that cannot access local cameras and such.
https://developer.chrome.com/extensions/tabCapture
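A minimal sketch of the extension side might look like this (assuming a Manifest V2 extension with the tabCapture permission; MediaRecorder typically emits WebM, so the H.264 encode would happen downstream, e.g. in ffmpeg):
// Hedged sketch: capture the current tab and record it in chunks.
chrome.tabCapture.capture({ audio: true, video: true }, (stream) => {
  if (!stream) {
    console.error(chrome.runtime.lastError);
    return;
  }
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  recorder.ondataavailable = (e) => {
    // e.data is a Blob chunk; ship it over a WebSocket/WebRTC connection
    // and transcode to H.264 on the receiving end if needed.
  };
  recorder.start(1000); // emit a chunk every second
});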
We are using CasparCG to render a rotating news display for live TV broadcast. It is an extremely powerful open source tool built by SVT, the Swedish public service broadcaster. A true Swiss army knife type of software, highly recommended:
https://github.com/CasparCG/server
Related
I'll try my best to explain this as well as I can:
I have programmed an art installation (an interactive animation with three.js). It runs very smoothly on my laptop, but not on older machines and on almost no mobile devices. Is there any way to run the JavaScript animation on a performant machine/server but display/render it on a website, so that it also runs on less performant devices? I was thinking about something like what's done for cloud gaming (as in shadow.tech) or services like parsec.app, maybe...
Any ideas/hints?
I've thought about the same concept before too. The answer to your question of rendering the three.js on a server and displaying it on an older machine will be something like this:
You are going to have to render three.js on a server, live-stream it to the client, then capture the client's controls and send them back to the server to apply them. Latency is going to be a pain in the ass, so write your code nice and neat, with a lot of error handling. Come to think of it, Google Stadia does the same thing when you play games through YouTube.
Server Architecture
So, roughly thinking about it, I would build my renderer backend like this:
First create a room with multiple instances of your three.js code. => This basically means creating multiple files (e.g. index.js, index1.js, index2.js, etc...). Then decide if you want to do this with the straightforward Recorded and Stream method, or capture the stream directly from the canvas and then broadcast it.
Recorded and Stream method
This means you create the js room, render and display it on the server machine, capture what is displayed on the screen using something like OBS Studio, and then broadcast it to the client. I don't recommend this method.
OR
captureStream() method
After creating your room on your server, run the three.js code, then access the canvas you are rendering the three.js code into, like this:
var canvas = document.querySelector('canvas');
var video = document.querySelector('video');
// Optional frames per second argument.
var stream = canvas.captureStream(60);
// Set the source of the <video> element to be the stream from the <canvas>.
video.srcObject = stream;
Then use something like WebRTC to broadcast it to the client.
This is an example site you can look at: https://webrtc.github.io/samples/src/content/capture/canvas-pc/
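Roughly, the broadcasting side could look like this (a minimal sketch; sendToClient is a hypothetical signaling helper, e.g. over a WebSocket, and real code also has to apply the client's answer and ICE candidates):
// Hedged sketch: feed the captured canvas stream into a WebRTC peer connection.
async function broadcastStream(stream, sendToClient) {
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Forward ICE candidates to the client via your signaling channel.
  pc.onicecandidate = (e) => {
    if (e.candidate) sendToClient({ candidate: e.candidate });
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToClient({ sdp: pc.localDescription });
  return pc;
}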
Honestly, don't ditch the client interaction idea. Instead, write code to detect latency beforehand, then decide whether to enable it for that specific client.
Hope this helps
There are some SaaS tools [1, 2] that give you a plugin to run on your site, so that you can view how your users are interacting with your website remotely.
I'm guessing this works by streaming DOM updates back to a remote server, but I'm not sure of that. I'm really interested in how this technology works, and whether or not there are tools out there to do similar tasks.
Here's the question: How do they do it? How can we reliably "co-browse" through the use of an installed JavaScript snippet? I know of some solutions using WebRTC, but the browser support doesn't seem to be there yet.
This is known as session replay.
I'm guessing this works by streaming DOM updates back to a remote server
No, it probably doesn't care about DOM updates. The script would capture every single input event, including key presses, mouse moves, mouse clicks, scroll events, etc. Those are what UX designers usually care about when evaluating their page design. They also might capture the initial state of the DOM.
If those plugins are just for data acquisition (like in A/B tests), I don't think the plugin scripts actually live-stream those events. They probably capture them, store them in some compressed data structure, and send them to the service provider when the user leaves the page or at regular intervals.
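A minimal sketch of that capture-and-batch approach (the event list, payload shape, and /collect endpoint are all assumptions for illustration):
// Hedged sketch: record input events and flush them in batches.
const buffer = [];
const record = (type) => (e) => {
  buffer.push({ type, t: Date.now(), x: e.clientX, y: e.clientY, key: e.key });
};
['mousemove', 'mousedown', 'keydown', 'scroll'].forEach((type) =>
  document.addEventListener(type, record(type), { passive: true })
);

// Flush at regular intervals; sendBeacon also works from an unload handler.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon('/collect', JSON.stringify(buffer.splice(0)));
}, 5000);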
Live streaming would certainly be possible, and it seems that this is what that co-browsing plugin does. (There's apparently also a back channel - a huge security risk! - to trigger mouse clicks etc. remotely.) WebRTC (which could also feed the complete video) might be one approach, but a web socket would be enough.
Some documentation on how togetherjs in particular does it can be found at https://togetherjs.com/docs/#technology-overview.
I currently need to extract snapshot(s) from an IP Camera using RTSP on a web page.
The VLC Web Plugin works well for playing the stream, but before I get my hands dirty playing with its JavaScript API, can someone tell me whether the API can help me take snapshot(s) of the stream, the way it's done in VLC Media Player? It isn't mentioned on the page above.
If the answer is 'No', please give me some other way to do this.
Thanks in advance.
Dang Loi.
The VLC plugin only provides metadata properties accessible from JavaScript.
For this reason there is no way to access the bitmap/video data itself, as plugins run sandboxed in the browser. The only way to obtain such data would be if the plugin itself provided a mechanism for it.
The only way to grab a frame is therefore to use a generic screen grabber (such as SnagIt), of course without the ability to control it from JavaScript.
You could, as an option, look into the HTML5 Video element to see if you can use your video source with that. In that case you could grab frames, draw them to canvas and from there save it as an image.
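If the source does play in an HTML5 <video> element, grabbing a frame is straightforward; a minimal sketch:
// Hedged sketch: draw the video's current frame onto a canvas,
// then export it as a PNG data URL.
function snapshot(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/png');
}

// Usage: const dataUrl = snapshot(document.querySelector('video'));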
Another option in case the original stream format isn't supported, is to transcode it on the fly to a format supported by the browser. Here is one such transcoder.
I want to design a small (at least, the very basics for now) IDE to make websites and applications with HTML, CSS, Javascript and a full LAMP stack.
One of the things I would like to do is be able to open the preview window in a different browser tab (instead of having it in an HTML-encoded tab, a <div> element or similar, like Dreamweaver does), or even, while in source mode, be able to display the toolset in another browser tab and move that tab onto a second display (or even a third one; I have just one display, but it illustrates my situation).
Once deployed in other browser tabs, I want every browser tab to reflect any changes made in any of the other browser tabs.
For example, if I have the source view in one display and open the properties grid in another and I change a color input value, I want the output view to reflect this change in color for whichever component gets the update.
I have my mind quite clear on how to approach the data structures, how to do the preview, data caching, storing the project data, etc.
What I'm less clear on is how to make the tabs communicate with each other effectively. One idea is to use a combination of AJAX requests and server-sent events (SSEs) to communicate, but even if that could work, it seems crude to me.
I was thinking of something like WebSockets with message passing. I could encode any changes to a given component, send them to the server, and have it route them to the appropriate listeners, so each one can reflect the change locally.
I have very little experience with WebSockets, though... so I'm in doubt. Can you give me a hint on what could be the most efficient method here?
I've experimented recently with WebSockets, in combination with Java on the server and JavaScript on the client side. I first wanted to go with plain Java SE, had a nightmare, didn't succeed in making it work, and in the end found this tutorial:
https://blog.idrsolutions.com/2013/12/websockets-an-introduction/
Works like a charm. Just stick with the environment mentioned there - Java EE, GlassFish, NetBeans... Not saying that NetBeans is the best IDE or anything...
Of course, there is the Node.js option for the back end if you prefer JavaScript.
But generally speaking, WebSockets work... and work well. :)
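For the tab-to-tab part, the client side can be quite small (a rough sketch; the URL and message shape are assumptions, and the server just has to broadcast each message to all other connected tabs):
// Hedged sketch: each tab connects to the same WebSocket server and
// applies property changes broadcast from other tabs.
const ws = new WebSocket('ws://localhost:8080/ide');

// Broadcast a change made in this tab (e.g. from the properties grid).
function sendChange(componentId, property, value) {
  ws.send(JSON.stringify({ componentId, property, value }));
}

// Apply changes arriving from other tabs.
ws.onmessage = (event) => {
  const { componentId, property, value } = JSON.parse(event.data);
  const el = document.getElementById(componentId);
  if (el) el.style[property] = value;
};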
I am trying to find a way to build a browser web app that runs on 2 monitors, like this:
I have a secondary monitor where I want to put some window, fullscreen and automatically on that secondary monitor, so no dragging, while the main app stays on the primary monitor, where the browser is... Nothing really works, so the only way seems to be with some desktop app.
I don't really care if the solution is browser-dependent at this point, but I still can't find a real solution.
Has anybody tried something like this and can give me some ideas on how to build it?
EDIT
... i need the second monitor to have some specific content, so not a clone of the primary...
kind of... I'm playing some game on the first monitor and I see statistics on the second...
What you are trying to do is not possible yet, but there is a Presentation API that is being discussed that would let you do exactly what you are looking for:
This specification defines an API to enable web content to access external presentation-type displays and use them for presenting web content.
Unfortunately, it seems like there are no browser implementations yet.
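For reference, usage per a later draft of the spec looks roughly like this (a hypothetical sketch until an implementation ships; stats.html stands in for whatever you'd show on the second screen):
// Hedged sketch based on a draft of the Presentation API.
const request = new PresentationRequest('stats.html'); // page for monitor 2
request.start().then((connection) => {
  // Send game stats to the presented page.
  connection.send(JSON.stringify({ score: 42 }));
});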
Your only other option right now is to use 2 independent browser pages that communicate with each other somehow (LocalStorage, WebSockets etc.).
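For example, the LocalStorage route is just a couple of lines in each tab (a minimal sketch; the 'game-stats' key and payload are assumptions):
// Tab on the primary monitor publishes stats:
localStorage.setItem('game-stats', JSON.stringify({ score: 42, hp: 97 }));

// Tab on the secondary monitor listens for updates; the 'storage' event
// fires in other same-origin tabs, not in the tab that wrote the value.
window.addEventListener('storage', (e) => {
  if (e.key === 'game-stats') {
    const stats = JSON.parse(e.newValue);
    document.title = 'Score: ' + stats.score; // update the stats view
  }
});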