Is it possible to integrate opencv.js with Google Meet, Zoom, or Twilio? The goal is to build a website that works with the video calling applications listed above and detects faces in real time, without requiring the user to install an extension. The main challenge is that the camera is allocated either to the opencv.js code we are writing or to the video calling service, but not both.
The virtual-camera approach requires installing a driver on the user's system, which is not suitable for an application at scale.
Yes, you can get the video streams using any one of the solutions below:
Zoom Meeting raw-data bot
Custom live streaming
My bot solution
For more, please watch here.
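If the target is a browser-based SDK such as Twilio Video, one way around the camera-allocation problem is to open the camera yourself, run opencv.js face detection on each frame, and publish the processed canvas stream instead of the raw device. A minimal sketch, assuming opencv.js is loaded as `cv` and the cascade file has already been written into its virtual filesystem:

```javascript
// Sketch: share one camera between opencv.js and a WebRTC calling SDK by
// publishing a processed canvas stream instead of the raw device.
const video = document.createElement('video');
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');

async function startProcessedTrack() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  const classifier = new cv.CascadeClassifier();
  classifier.load('haarcascade_frontalface_default.xml'); // assumed preloaded into cv's FS

  (function processFrame() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const src = cv.imread(canvas); // RGBA frame read back from the canvas
    const gray = new cv.Mat();
    cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);
    const faces = new cv.RectVector();
    classifier.detectMultiScale(gray, faces, 1.1, 3, 0);
    for (let i = 0; i < faces.size(); i++) {
      const r = faces.get(i);
      ctx.strokeStyle = 'red';
      ctx.strokeRect(r.x, r.y, r.width, r.height);
    }
    src.delete(); gray.delete(); faces.delete(); // opencv.js needs manual frees
    requestAnimationFrame(processFrame);
  })();

  // The calling SDK publishes this track, so it never touches the camera,
  // e.g. with Twilio: Video.connect(token, { tracks: [track] })
  return canvas.captureStream(30).getVideoTracks()[0];
}
```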
I'm trying to render HTML as an H.264 stream and then stream it to another PC on my network.
I've got the last part, streaming to another PC on my network, down.
Now my only problem is rendering the webpage.
I can't render it once, because it isn't a static webpage.
I need to load the webpage, fetch images, run javascript and open websockets.
The only way I can imagine this working is if I run a browser (or maybe something like CEF?), "capture" the output, and encode it as H.264.
I'm basically trying to do the same thing as OBS's BrowserSource; the only reason I'm NOT using OBS is that I can't find a good way to run it headless.
NOTE: I need to be able to do this from the command line, completely headless.
I've done this with the Chromium Tab Capture API, and the Off-Screen Tab Capture API.
Chromium will conveniently handle all the rendering, including bringing in WebGL-rendered content, and composite it all together into a neat MediaStream for you. From there, you can use it in a WebRTC call or pass it off to a MediaRecorder instance; a rough sketch follows below.
The off-screen tab capture even has a nice separate isolated process that cannot access local cameras and such.
https://developer.chrome.com/extensions/tabCapture
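For reference, the capture-and-encode side looks roughly like this from an extension page (the mimeType and the chunk handler are assumptions; H.264 support in MediaRecorder varies by build):

```javascript
// Sketch: capture the current tab as a MediaStream and encode it with
// MediaRecorder. Requires the "tabCapture" permission in the manifest.
chrome.tabCapture.capture({ video: true, audio: false }, (stream) => {
  if (!stream) {
    console.error(chrome.runtime.lastError);
    return;
  }
  const recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm;codecs=h264', // may need a VP8/VP9 fallback
    videoBitsPerSecond: 4000000,
  });
  recorder.ondataavailable = (e) => {
    // Ship each encoded chunk over the network (WebSocket, etc.).
    if (e.data.size > 0) sendChunk(e.data); // sendChunk is hypothetical
  };
  recorder.start(1000); // emit a chunk every second
});
```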
We are using CasparCG to render a rotating news display for live TV broadcast. It is an extremely powerful open-source tool built by SVT, the Swedish public service broadcaster. A true Swiss-army-knife type of software, highly recommended:
https://github.com/CasparCG/server
I have developed an RTC application, and now we have added a VR feature.
How do I integrate it with real-time scenarios?
I found three.js; the idea is to take photos from the RTC stream and use those images with the three.js library, which is where the VR view comes from.
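If the RTC stream is attached to a <video> element, three.js can also consume it directly as a live texture instead of via still photos. A minimal sketch of that idea (the element id and scene setup are assumptions):

```javascript
import * as THREE from 'three';

// Sketch: map a live WebRTC <video> element onto a three.js mesh.
// `remoteVideo` is assumed to be the element the RTC SDK renders into.
const remoteVideo = document.getElementById('remoteVideo');

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 2;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// VideoTexture re-uploads the current video frame on every render,
// so the mesh always shows the live stream.
const texture = new THREE.VideoTexture(remoteVideo);
const screen = new THREE.Mesh(
  new THREE.PlaneGeometry(1.6, 0.9),
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(screen);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```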
I'm making a website where people can customize their jewellery online. I'm building it with Three.js, and I want to make an Android app too, so I need an API-only app for my website and the Android app. Please tell me how I can make an API-only app with Three.js, like my website, that can be consumed by the website, an Android app, and an iOS app.
Any suggestions are appreciated.
Three.js is written in JavaScript, so it cannot easily be integrated into a native app. Performance-wise, it would probably be best to reimplement the rendering in OpenGL ES (which WebGL is also based on).
If you want to stay with your three.js implementation, your only choice is to run your code in a browser environment (because of WebGL and JavaScript), using a WebView that runs the JavaScript and WebGL code (quick googling turned up this, which looks promising: https://blog.ludei.com/webgl-ios-8-safari-webview/).
A proper react-native implementation of WebGL, or even of three.js, might also turn up at some point...
I've implemented the Wikitude ARView, and my client is having some issues after loading POIs around his location (wrong directions and altitudes for markers). So I tried hard-coding his current location and passed the POIs around his location to the Wikitude ARView, but it didn't work: no markers were displayed in the ARView.
1.) I just need to know: is this possible in the Wikitude ARView? And what mechanism exactly does Wikitude use while drawing markers on the view (i.e. do they internally track the user's location, etc.)?
2.) If yes, please point me to documentation (if any) or anything else that could sort this out.
1) On iOS, the Wikitude SDK uses the CLLocationManager class to get the current location. On Android, you have to implement your own location handling (sample code is provided with the SDK example application).
Both native SDKs, however, allow you to inject a custom location, which is then used as the reference for the Architect World.
The SDK also provides a RelativeLocation class (JS), which may also be helpful for your implementation; see the sketch below.
2) As mentioned before, have a look at the SDK example application that is provided with the SDK. The SDK package also contains documentation covering the native SDKs as well as the JavaScript API.
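In the Architect (JS) world, injecting a hard-coded reference location and placing markers relative to it looks roughly like this (the asset path, offsets, and coordinates are illustrative assumptions):

```javascript
// Sketch: place markers in the Architect World with Wikitude's JS API.
// A RelativeLocation whose reference is null is positioned relative to the
// user's (possibly injected) location, so it shows up even with a
// hard-coded GPS fix.
var markerImage = new AR.ImageResource('assets/marker.png'); // assumed asset

// 50 m north, 20 m east, 5 m above the current reference location.
var relativeLocation = new AR.RelativeLocation(null, 50, 20, 5);

var marker = new AR.GeoObject(relativeLocation, {
  drawables: { cam: [new AR.ImageDrawable(markerImage, 2.5)] } // height in SDUs
});

// Alternatively, an absolute position for a hard-coded POI:
var poiLocation = new AR.GeoLocation(47.0707, 15.4395, 440); // lat, lon, altitude
var poi = new AR.GeoObject(poiLocation, {
  drawables: { cam: [new AR.ImageDrawable(markerImage, 2.5)] }
});
```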
I am writing a mobile app using open web technologies, primarily targeting the newly-emerging Firefox OS, but planning to support any mobile device with a web browser. The app deals with public transport, currently in my city, but with the ability to extend to other areas as well. I want to show users graphically where the stations for public transport lines are, provide shortest-route information from station A to station B, track vehicle positions using the city's public API, and so forth.
Since it is a Firefox OS app, I'm using HTML5/CSS3 for the presentation and JavaScript for the logic, and I keep these files local, so the app never requires Internet access to work. However, the problem I am facing is rendering city maps (with possible overlays on top, for example highlighted roads and stations). I want the app not to depend on an Internet connection, since I suppose it will sometimes be used during transport and in public, where there is no possibility of a persistent WiFi connection and users have to rely on their carrier's data connection, which can prove costly and put off potential users.
So far I've only been able to find Kothic-JS (which uses an HTML5 canvas) that can render OpenStreetMap data from offline files, but its performance worries me: it stutters in my Firefox OS simulator even with plenty of computing power available on my desktop computer. I can only imagine what horrors it would cause on low-end mobile devices running FxOS; I fear the app would not be usable. Other libraries (such as OpenLayers) all tend to download server-generated tiles, as far as I was able to see.
Is rendering city maps on mobile devices using HTML5 and JavaScript feasible? How should I approach this problem? The map files can be transformed to SVG using clever XSLTs and perhaps pre-split into SVG tiles, clipping where necessary, but the size of these tiles can never be chosen right because of the many zoom levels in use (i.e. if the tiles occupy the device screen at zoom level 5, at zoom level 2 they occupy only small pieces of it, and I end up drawing on the order of 30 tiles to fill the screen). Is there any other way to do this besides turning to online services? I am aware there are great native libraries for client-side map rendering, but none of these can be used from within JavaScript.
You should try Leaflet: it's a great open-source JavaScript mapping library with handheld/touch support. I've used it in my project and it works great on Firefox OS.
As for offline support, you can find several blog posts about how to cache tiles for offline use with Leaflet. You should either bundle the tiles with your app (if it is only meant to display local information) or implement some kind of caching algorithm if you can't bundle the assets; see the sketch below.
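For the bundled-tiles variant, pointing Leaflet at tiles shipped inside the packaged app is nearly a one-liner (the paths, coordinates, and zoom limits are assumptions):

```javascript
// Sketch: a Leaflet map that reads pre-rendered tiles packaged with the
// app instead of fetching them from a tile server.
var map = L.map('map').setView([45.7597, 21.2300], 14); // assumed city center

L.tileLayer('tiles/{z}/{x}/{y}.png', { // relative path into the app package
  minZoom: 12, // bundle only the zoom levels you actually need,
  maxZoom: 16, // to keep the package size manageable
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);
```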