Closed 8 years ago: this question needs to be more focused and is not accepting answers.
I have recently come across the WebRTC data channel. I would like to integrate two infrastructures to enable WebRTC interoperability.
A Lync server will provide signalling and presence to help locate the Lync client, while the other peer is connected to an IMS application server. That other peer is not a Lync client; it is a WebRTC-enabled browser. How can I transfer data by integrating these two infrastructures (Lync and IMS)?
It would be great to have some information on which application-layer protocols can be used to transfer data streams between the two interoperating peers.
Unless you're prepared to do a lot of low-level coding and/or high-level hair-pulling, my suspicion is that WebRTC isn't quite ready for a scenario like this yet. Some folks have managed to get it working with servers like Asterisk, and there's supposedly a general-purpose SIP client available at https://code.google.com/p/sipml5/. But from what I hear hanging out on the WebRTC mailing list, folks are having a fair bit of trouble with these integration scenarios. There's certainly nothing that just works out of the box. Lync supports SDP and SIP, but I expect you'd need to spend a lot of time figuring out how to transform the SDP that WebRTC generates before it ever gets to the Lync server.
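For what it's worth, here is a minimal sketch of where that SDP rewriting would hook in on the browser side, written against the current promise-based WebRTC API rather than the prefixed one from 2013. The transformSdpForLync function is purely hypothetical; the actual rewriting rules depend on whatever the Lync/IMS gateway expects.

```javascript
// Sketch only: transformSdpForLync is a hypothetical placeholder for whatever
// rewriting the Lync/IMS gateway needs (codec lines, crypto attributes, etc.).
function transformSdpForLync(sdp) {
  return sdp; // identity for now; real mapping rules would go here
}

const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});
pc.createDataChannel('data');

pc.createOffer()
  .then(offer => pc.setLocalDescription(offer).then(() => offer))
  .then(offer => {
    const sdpForLync = transformSdpForLync(offer.sdp);
    // Hand { type: offer.type, sdp: sdpForLync } to the SIP/Lync signalling layer here.
    console.log(sdpForLync);
  });
```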
[Edit 1/28/2013] - Beyond the issues above, the real problem may be the codecs supported by each platform. Currently, I believe the only video codec supported by WebRTC is Google's VP8, which doesn't appear to be supported natively by Lync. So you'd need a realtime gateway/transcoder sitting between them, translating between H.264 (or whatever codec Microsoft Lync settles on) and VP8. Assuming you can find a gateway to do that - they may very well exist - I can't imagine it's going to scale very well.
Just to complement Ken Smith's answer, check out MCU Media Server by Medooze.
They claim to support transcoding and conferencing, and they support WebRTC.
Closed 1 year ago: questions seeking recommendations for books, tools, or software libraries are off-topic, and this question is not accepting answers.
I recently came up with a fun (but difficult) side-project idea: a browser instance that can be shared by multiple people. Essentially, you share the same browser session with your buddies, and everyone can see and perform the same actions you would normally be able to do alone. In this case, however, the changes affect everyone (e.g. closing a tab closes it for everyone, and everyone watches the same YouTube video).
All in all, the process will go something like this.
I open the browser (session)
I send a unique link provided by the browser session to my friend
My friend opens the link and is led to the same browser session as me
We both have a real-time view of each other's browser and can perform various actions that are reflected to all party members
The specific session ends when all the users leave the session (or something like that)
Remind you of something? Hint: Zoom
As a web developer, something like this seems like a project that would involve mainly backend work, which I'm not too familiar with. Chromium seems like a good open-source option for the actual browser, but the session-sharing features seem a bit daunting. I could create a basic browser from scratch or make it a Chrome extension like Netflix Party, but there obviously has to be a backend somewhere, somehow.
Would love to hear some opinions from you guys. Thank you!
Very interesting side project to improve your skills. I think I would go for WebRTC. WebRTC provides the API and protocols for P2P connections, and the users can send and receive events in real time through their browsers.
If you want to control the party with a central server, you might want to use WebSocket.
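As a rough sketch of that central-server idea (the relay URL and event names below are made up for illustration), each participant could keep a WebSocket open and broadcast every browser action as a small JSON event:

```javascript
// Each participant keeps a WebSocket open to a relay server and broadcasts
// browser actions as events. The URL and event names are placeholders.
const socket = new WebSocket('wss://example.com/party/session-1234');

// Tell everyone else when the local user closes a tab.
function closeTab(tabId) {
  socket.send(JSON.stringify({ type: 'tab-closed', tabId }));
}

// Apply the same action when another participant performs it.
socket.addEventListener('message', event => {
  const action = JSON.parse(event.data);
  if (action.type === 'tab-closed') {
    // e.g. remove the tab from the shared UI, or tell the shared browser to close it
    console.log('A peer closed tab', action.tabId);
  }
});
```

The same event scheme would work peer-to-peer over an RTCDataChannel if you would rather avoid the central relay.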
With a simple textbook toy project I tend to lose the motivation to finish learning the whole technology, but your project seems quite challenging. It will be fun to learn all these technologies for sure! Happy coding!
Closed 2 years ago: this question needs to be more focused and is not accepting answers.
With the release of Chrome 77 the Web Serial API became available behind an experimental flag. This is particularly useful for desktop applications running in NW.js or Electron, where Node.js has previously provided (and to a large degree still provides) a bridge between web and native.
I find myself very much wanting to abandon the use of NPM packages like serialport, which extend both NW.js and Electron to provide serial port access.
While Electron 8.0.1 does make available navigator.serial, it's not exactly clear how much of the API is actually implemented. To further complicate things, there is no good documentation for the API (at least in my search) besides https://wicg.github.io/serial/ and https://github.com/WICG/serial/blob/gh-pages/EXPLAINER.md. I've tried tinkering with it on my own, but it's not clear whether I'm using it incorrectly, or whether parts simply aren't implemented.
So what is the status of this API? Which parts are reliably implemented (in Chromium), and is there any indication of when this will be ready for prime time? I think a lot of people are wondering this as it opens quite a few doors for interaction with the user's PC.
Here are some resources for tracking the status of the Serial API and its implementation in Chromium:
Draft specification: as you've pointed out, it is incomplete and I'm working on fixing that.
Specification "explainer": this is a less formal introduction to the specification and a more up-to-date reference for the current design of the API.
Chrome Platform Status entry: this tracks the official implementation status in Chrome.
Chromium implementation tracking issue: star this issue for updates as the implementation work progresses.
Polyfill library: this library uses WebUSB to implement the API for standard USB CDC class devices. Think of it at the moment as a prototype of what the API could look like when implemented in a browser.
Code lab: if you're looking for a larger example of how to use the API, this code lab explains how to get started communicating with a particular device in a step-by-step fashion.
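To give a feel for the intended shape of the API, here is a rough sketch of basic usage based on the draft specification and explainer above. Treat it as an illustration rather than a reference: while the API sat behind a flag, details such as the exact option names in port.open() shifted between versions.

```javascript
// Rough sketch of basic Web Serial usage, per the draft spec and explainer.
async function readFromSerial() {
  // requestPort() must be called from a user gesture, e.g. a button click.
  const port = await navigator.serial.requestPort();
  await port.open({ baudRate: 9600 });

  const reader = port.readable.getReader();
  try {
    while (true) {
      const { value, done } = await reader.read();
      if (done) break;                // the stream was closed
      console.log('received', value); // value is a Uint8Array
    }
  } finally {
    reader.releaseLock();
    await port.close();
  }
}
```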
Closed 6 years ago: questions seeking recommendations for books, tools, or software libraries are off-topic, and this question is not accepting answers.
What is the best open-source library with these features?
Peer to Peer communication
Instant message
Audio call
Video call
Server SDK (preferably a Node.js server, but possibly another language)
JavaScript SDK
iOS SDK
Android SDK
I know one library (EasyRTC) which has the above features except the iOS and Android SDKs; those SDKs are not open source and must be paid for.
QuickBlox is also not fully open source: you must pay for the server SDK, though the other SDKs are free.
And so on. I want to use something fully open source.
There is no single answer to this, as any response will be opinionated.
WebRTC is supported by the major browsers except Safari (and iOS generally), as Apple seems to have a problem with anything peer-to-peer, although they are rumoured to be working on WebRTC support.
Have a look at https://webrtc.org/ for code samples, tutorials and discussions on how things work.
For Android you should use Crosswalk, as that will give you modern Chrome capabilities. For iOS there is a project called iosrtc (https://github.com/eface2face/cordova-plugin-iosrtc); it is not completely plain sailing, but with some perseverance it can be made to work.
You will also need a signalling server of some kind. PeerJS (http://peerjs.com/) is open source and uses a Node.js backend; there are other signalling servers, depending on your needs.
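To illustrate the shape of the JavaScript side, here is a minimal sketch using the PeerJS client. It is only a sketch: the peer ID 'friend-id' is a placeholder you would exchange out of band, and depending on the PeerJS version you may need to point the Peer constructor at your own PeerServer rather than the default broker.

```javascript
// Minimal PeerJS sketch: one data connection plus an audio/video call.
const peer = new Peer(); // default broker; can be pointed at a self-hosted PeerServer

peer.on('open', id => {
  console.log('My peer ID:', id);

  // Instant messaging over a data channel.
  const conn = peer.connect('friend-id'); // placeholder ID
  conn.on('open', () => conn.send('hello'));

  // Audio/video call.
  navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(stream => {
      const call = peer.call('friend-id', stream);
      call.on('stream', remoteStream => {
        // attach remoteStream to a <video> element here
      });
    });
});

// Accept incoming connections and calls.
peer.on('connection', conn => conn.on('data', msg => console.log(msg)));
peer.on('call', call => call.answer()); // answer without sending local media
```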
Be warned that while WebRTC has been around for some time already, it is far from a simple drop-in. You will need to do some homework to get the answer you are seeking.
Closed 8 years ago: this question needs to be more focused and is not accepting answers.
Alright, so I have a C++ game server that I want to connect to over UDP from JavaScript, but I'm not sure how to do that.
WebSockets don't seem to work, as they only support TCP, and WebRTC doesn't seem suited to this kind of undertaking either (at least from what I've read).
I wouldn't mind using technologies that are still in beta and therefore not available on all platforms, as long as they are available in Chrome (Canary).
You won't be able to use UDP directly. That's a fundamental property of the web sandbox. See Can I use WebRTC to open a UDP connection?
If you want to talk directly, you can with data channels, but that is going to need a bunch of things over and above UDP on the server side: ICE, DTLS and SCTP are all required; see https://datatracker.ietf.org/doc/html/draft-ietf-rtcweb-transports. The standards are still somewhat in flux there, so I'm not sure that you want to dive into that morass right away.
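On the browser side, the closest you can get to UDP semantics is an unordered data channel with retransmissions disabled. A minimal sketch follows; the signalling with the C++ server is omitted, since it depends entirely on the server-side ICE/DTLS/SCTP stack you choose.

```javascript
// Data channel configured for UDP-like behaviour: unordered, no retransmits.
const pc = new RTCPeerConnection();

const channel = pc.createDataChannel('game', {
  ordered: false,     // don't hold packets back to preserve ordering
  maxRetransmits: 0   // don't retransmit lost packets
});

channel.onopen = () => channel.send('player-input: jump');
channel.onmessage = event => console.log('server says:', event.data);

// The offer/answer and ICE candidate exchange with the game server goes here.
```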
You could also build a JavaScript+Flash bridge and use the Adobe Flash Player RTMFP protocol, which is based on UDP. But if you need raw UDP or something close to it, you are better off using a WebRTC data channel.
Closed 9 years ago: this question is off-topic for Stack Overflow and is not accepting answers.
What is ORBX.js, and how does it relate to the following things, which I thought were only possible in a native operating-system application (not on the web, but actually running on the system):
Watching a 1080p video without codecs
Playing a game (Like Left 4 Dead 2) on the Web
What characteristics does it bring to the web that we are used to seeing only in non-web software (by this I mean running an app that is not web-related, like a computer game)?
How is this possible?
For a non-critical "this is super awesome" piece for those who have not yet heard of this technology, see this write-up: Today I saw the future (this is the same piece mentioned by Jonathan in the comment above, and yes, it's kind of a puff piece).
To digest it: this is the latest re-emergence of the thin-client concept. In short, the viewing device uses a web browser to open a "stream" that is very much like a virtual/remote desktop. It specifically supports remote desktop applications, but it also supports setups where the behind-the-scenes implementation is different; this is the "GPU cloud" being referred to.
What makes this notable is that it would effectively allow future computers and devices to aim at one stationary target: the ability to run a web browser and decode a video stream. If they can do that, they can just as well run any application or program imaginable, no matter how intensive it might be, because all the processing would be done in the cloud and the only thing sent to the client would be a video stream. In theory this could extend consumer hardware cycles and cut prices, as a 1 GHz CPU would be just as good as a 100 GHz one.
In reality, the obstacles are bandwidth, latency, connectivity, and of course a successful implementation of this technology with widespread marketplace compatibility. Then, of course, you start having to pay for cloud computing as a cost of using your cloud-based applications. And without a fast internet connection, you are back to using only natively downloaded apps.
For now there is no open encoder, which is a problem for developers and programmers. Only time will tell if it is a viable technology.