Use Node.js as WebRTC peer - Decode frames on server - javascript

I would like to:
Set up Node.js as a WebRTC peer (e.g. one that a web browser can connect to)
Decode video frames in real time on the server side (e.g. streamed from the browser's webcam)
What is the easiest way to accomplish this? I have seen many similar questions, but have not encountered any obvious answers.
Is this possible with just Node, or must one use a gateway such as Janus as well?
Thanks!

If you require real-time video: implementing DTLS, SRTP and the codec handling yourself is not trivial.
If you don't require real time, you might want to give the MediaRecorder API (often referred to as MediaStreamRecorder) a try and send the data from its ondataavailable event over a WebSocket to your Node server.
Or capture frames from a canvas, as shown here, and send them to the server as JPEG images.
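For the non-realtime route, a minimal sketch of the MediaRecorder-over-WebSocket idea might look like the following; the ws://localhost:8080 endpoint, the chunk interval and the output filename are assumptions for illustration, not part of the original answer.

```javascript
// Browser: record the webcam and push encoded chunks to the server.
const ws = new WebSocket('ws://localhost:8080'); // assumed endpoint

navigator.mediaDevices.getUserMedia({ video: true, audio: false }).then((stream) => {
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm; codecs=vp8' });
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0 && ws.readyState === WebSocket.OPEN) ws.send(e.data);
  };
  recorder.start(1000); // emit a chunk roughly every second
});
```

```javascript
// Node server: append the incoming chunks to a WebM file (uses the 'ws' package).
const WebSocket = require('ws');
const fs = require('fs');

const wss = new WebSocket.Server({ port: 8080 });
wss.on('connection', (socket) => {
  const out = fs.createWriteStream('capture.webm'); // assumed output path
  socket.on('message', (chunk) => out.write(chunk));
  socket.on('close', () => out.end());
});
```

Note that what arrives on the server is an encoded WebM stream, not decoded frames; you would still need something like ffmpeg to decode it.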

In the end, the answer was to run a Janus server alongside Node. A custom Janus plugin was written to handle the WebRTC media and pass the frames to the Node server as needed.

Related

How can I have a server stream a video with WebRTC?

My current use case is that I'm trying to mock a system that uses WebRTC for live video streaming (for a robot). This way, I don't have to be connected to the robot to develop the client.
My issue as of now is that I have no idea how to stream a video using WebRTC to connected peers. I've seen many examples of how to do this from client to client using a signaling server, but other than directly sending the video buffer using socket.io, I haven't seen an example of server -> client WebRTC streaming.
I'm planning to use Node.JS for mocking the video stream as I've been using it for the rest of the robot's systems.
It really isn't that different, client to client or server to client: you want to stream/broadcast a video to all the connected peers, so think of your server as just another client in the setup.
You can also use a WebRTC solution like Janus; it is a simple gateway and completely open source. Refer to WebRTC & Dev APIs for more info.
If you find latency issues as the number of peers increases, you can look at mesh, routing and multi-peer architectures for possible solutions.
Hope it helps.
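If you do want Node itself to act as the sending peer (rather than going through a gateway), one option is the node-webrtc (wrtc) package, which exposes RTCPeerConnection in Node. The sketch below assumes socket.io for signaling with made-up 'offer'/'answer'/'candidate' event names, and uses wrtc's nonstandard RTCVideoSource to inject dummy frames in place of the robot's camera; treat it as a rough outline, not a drop-in implementation.

```javascript
// Node as the streaming peer, using node-webrtc (wrtc) and socket.io for signaling.
const { RTCPeerConnection, nonstandard } = require('wrtc');
const io = require('socket.io')(3000); // assumed signaling port

io.on('connection', (socket) => {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

  // Synthetic video track fed from raw I420 frames (mocking the robot's camera).
  const source = new nonstandard.RTCVideoSource();
  pc.addTrack(source.createTrack());

  const width = 320, height = 240;
  const frame = { width, height, data: new Uint8ClampedArray(width * height * 1.5) };
  const timer = setInterval(() => source.onFrame(frame), 1000 / 30); // ~30 fps of dummy frames

  pc.onicecandidate = ({ candidate }) => candidate && socket.emit('candidate', candidate);
  socket.on('candidate', (c) => pc.addIceCandidate(c));

  // The browser creates the offer; the server answers.
  socket.on('offer', async (offer) => {
    await pc.setRemoteDescription(offer);
    await pc.setLocalDescription(await pc.createAnswer());
    socket.emit('answer', pc.localDescription);
  });

  socket.on('disconnect', () => { clearInterval(timer); pc.close(); });
});
```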

How can I mix incoming WebRTC audio streams into a single stream on a server?

I'm working on a project which involves building an audio-conferencing app for the web. Currently my working system uses a WebSocket server to negotiate connections between peers, which can then stream audio directly to one another. However, I wish to implement the server as its own client/peer, which will receive all incoming audio streams, "mix" them into a single source/stream, and then stream it to all peers individually. The goal is to avoid direct peer-to-peer connections between user connections.
Perhaps a simpler question would be: how can I accomplish the concept in the figure below, with the green squares being RTCPeerConnections and the server "forwarding" the incoming streams to the recipients?
Figure
How can I accomplish this, and is the concept feasible in regards to system resources of the server?
Thanks.
You can use Kurento. It is based on WebRTC and its media server features include group communications, transcoding, recording, mixing, broadcasting and routing of audiovisual flows.
The concept you are looking for is called a Multipoint Control Unit (MCU). An MCU is not part of standard WebRTC; WebRTC itself is peer-to-peer only.
There are several media server solutions that offer MCU functionality. Kurento, as suggested by Milad, is one option. Other examples are Jitsi Videobridge and Janus.
A more recent approach you might want to consider is an SFU (Selective Forwarding Unit), which forwards each stream to the other participants instead of mixing them.
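For what it's worth, the mixing step itself is conceptually simple: decode all incoming audio, sum it, and send one combined stream back out. If your server-side peer happens to run somewhere with Web Audio support (for example a headless browser acting as the mixing client), a minimal sketch might look like this; remoteStreams is assumed to be the collection of MediaStreams received from your RTCPeerConnections.

```javascript
// Mix several incoming MediaStreams into one outgoing stream with the Web Audio API.
const ctx = new AudioContext();
const mixDestination = ctx.createMediaStreamDestination();

for (const stream of remoteStreams) {            // remoteStreams: assumed input list
  ctx.createMediaStreamSource(stream).connect(mixDestination);
}

// mixDestination.stream is a single MediaStream carrying the summed audio;
// add its track to each outgoing RTCPeerConnection instead of the raw inputs.
const [mixedTrack] = mixDestination.stream.getAudioTracks();
```

Regarding server resources: per-participant decoding and re-encoding is CPU-heavy, which is why dedicated media servers such as Kurento, Janus or Jitsi (which do this natively) scale much better than an ad-hoc mixing client.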

Buffering/Syncing remote (webRTC) mediaStreams

I am currently building a webRTC application that streams audio (the classic server, client, one to many model). Communication and signaling is done through sockets.
The problem I have found is that there is a lot of variability when streaming to smart devices (mainly due to varying processing power), even on a local network.
Hence, I am trying to add functionality that syncs the stream between devices. At a high level, I was thinking of buffering the incoming stream; once all devices are connected, the last peer to connect would share something indicating where its buffer starts, and all peers would play the buffer from that position.
Does this sound possible? Is there a better way to sync up remote streams? If I were to go down this path, how would I buffer a remote MediaStream object (or data from a blob URL), potentially into some form of array, which could be used to identify a common starting position between the streams?
Would I use the JavaScript AudioContext API?
I have also looked at NTP and other syncing mechanisms, but I couldn't find how to apply them in the context of a WebRTC application.
Any help, pointers, or direction would be greatly appreciated.
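As a starting point, the AudioContext API can indeed expose the raw samples of a remote MediaStream so they can be buffered and compared. A rough sketch, assuming remoteStream comes from the peer connection's ontrack handler (ScriptProcessorNode is deprecated but still widely supported; an AudioWorklet would be the modern replacement):

```javascript
// Capture raw samples from a remote MediaStream so they can be buffered/aligned.
const ctx = new AudioContext();
const source = ctx.createMediaStreamSource(remoteStream); // remoteStream: assumed, from ontrack
const tap = ctx.createScriptProcessor(4096, 1, 1);

const buffered = []; // Float32Array chunks in arrival order
tap.onaudioprocess = (e) => {
  buffered.push(new Float32Array(e.inputBuffer.getChannelData(0)));
};

source.connect(tap);
tap.connect(ctx.destination); // the node only runs while connected; it outputs silence
```

Aligning those buffers across devices would still need some shared clock reference (for example NTP-style offset exchange over your signaling sockets) to translate "buffer position" into a common timeline.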

WebRTC - help me understand a few concepts

I'm new to WebRTC, actually just heard about it a few days ago and I've read a lot about it. However, I still have a few questions.
What do I need to explore the usage of WebRTC? E.g.: do I need a server, any libraries etc.? I'm aware that new version of Chrome and Firefox support WebRTC, but besides these two browsers, is there anything else that is necessary?
What is the main purpose of WebRTC when addressing practical usage? To video chat? Audio chat? What about text-chatting?
Does WebRTC need a server for any kind of browser-to-browser interaction? I've seen some libraries, such as PeerJS that don't explicitly mention any kind of server... so is it possible to connect two clients directly? There's also a PeerServer, which supposedly helps broker connections between PeerJS clients. Can I use WebRTC without such a server?
What are the most commonly used libraries for WebRTC?
What's a good starting point for someone who's totally new to WebRTC? I'd like to set up a basic Google Talk-like service, to chat with one person.
Thank you so much guys.
You can find many docs here, e.g. this one, this one and this one!
You can find a few libraries here.
A simple multi-user WebRTC app needs the following things (a minimal sketch follows below):
A signaling server to exchange SDP/ICE/etc., e.g. socket.io, WebSockets, XMPP, SIP or XHR.
An ICE server, i.e. STUN and/or TURN, to make sure firewalls don't block the UDP/TCP ports.
A JavaScript app that invokes the WebRTC JavaScript API, i.e. RTCPeerConnection.
It takes just a few minutes to set up a WebRTC peer-to-peer connection. You can set up peer-to-server connections as well, where media servers are used to transcode/record/merge streams or to relay to PSTN networks.
WebRTC DataChannels can be used for gaming, webpage synchronization, fetching static content, and peer-to-peer or peer-to-server data transmission.
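Putting those pieces together in the browser, a minimal caller-side sketch could look like this; sendToPeer(), onSignal() and remoteVideo are placeholders for your own signaling transport and page elements, not real APIs.

```javascript
// Minimal caller flow: getUserMedia -> RTCPeerConnection -> SDP/ICE over signaling.
// sendToPeer()/onSignal() stand in for socket.io/WebSocket/XHR; remoteVideo is a <video> element.
const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

pc.onicecandidate = ({ candidate }) => candidate && sendToPeer({ candidate });
pc.ontrack = ({ streams: [stream] }) => { remoteVideo.srcObject = stream; };

async function call() {
  const local = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  local.getTracks().forEach((t) => pc.addTrack(t, local));
  await pc.setLocalDescription(await pc.createOffer());
  sendToPeer({ sdp: pc.localDescription });
}

onSignal(async (msg) => {
  if (msg.sdp) await pc.setRemoteDescription(msg.sdp);       // answer from the callee
  if (msg.candidate) await pc.addIceCandidate(msg.candidate);
});
```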
What do I need to explore the usage of WebRTC? E.g.: do I need a server, any libraries etc.? I'm aware that new version of Chrome and Firefox support WebRTC, but besides these two browsers, is there anything else that is necessary?
WebRTC is a JavaScript API for web developers that can be used for audio and video streaming.
But there are two caveats:
You need a signaling path. For example, if your first user is Alice on Firefox and your second user is Bob on Chrome, they have to negotiate the codecs and streams to use.
WebRTC does not provide a signaling implementation, so you need to implement the signaling yourself. It is quite simple: you send an SDP (the stream configuration) to the other participant and receive an SDP answer back. You can use plain HTTP via an Apache server, WebSockets, or any other transport to exchange the SDP.
So you need an intermediary signaling server working over WebSockets or HTTP/HTTPS.
Once you have negotiated the streams you start sending your audio or video, but the destination user might be behind a symmetric NAT. That means your stream will not be delivered to the target user directly; in that situation you need a TURN server to traverse the NAT.
Finally, you will need two pieces of server-side logic:
1) A signaling server
2) A TURN or proxy server
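On the client side the TURN fallback is just part of the RTCPeerConnection configuration; here is a small sketch with placeholder server addresses and credentials (turn.example.com, demo/secret are made up):

```javascript
// TURN is only used when a direct/STUN path fails. Uncommenting iceTransportPolicy
// forces relaying, which is handy for testing that the TURN server actually works.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    { urls: 'turn:turn.example.com:3478', username: 'demo', credential: 'secret' }, // placeholders
  ],
  // iceTransportPolicy: 'relay',
});
```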
To start, take a look at Web Call Server. The server implements HTML5 WebSocket signaling and SRTP proxying, acting as a TURN server.
You can also study the open-source code of the WebRTC application.
First steps:
1. Download the signaling and streaming server.
2. Download and unzip the web client.
3. Start the web client and step through the JavaScript code to learn more about how WebRTC works.

RTP RTSP implementation in javascript

I have a client program and a server program. The server is on my localhost and it has my .mpeg video.
Using Node.js I am supposed to stream a video from the server. The client sends request messages such as play/pause/resume/rewind etc., so I guess I have to use RTSP to figure out what to send over RTP. But I don't know where to start.
All I have so far is the regex to filter the messages; for example, the client has buttons like play/pause/setup etc., so I can grab that text, and I have a switch statement.
But if I get SETUP, what should I do?
P.S. I am not allowed to use RTSP or RTP modules. I have to do it all from scratch.
When streaming an MPEG file over the wire you have to tackle RTSP and RTP separately. RTSP is used for signaling: establishing the session and starting the underlying RTP stream. If you need to do this in Node.js, I recommend loading a library that already implements RTSP/RTP (creating your own is quite an undertaking, but it is doable as well).
There is some info on loading C++ libraries in Node.js here: How can I use a C++ library from node.js?
So basically, from your MPEG file you need to extract the raw H.264 stream. For this I recommend ffmpeg or some other library/code that understands the MPEG file structure. Then you need to packetize the encoded frames inside RTP packets, which you then send from the server to the client. The client depacketizes the RTP packets back into encoded frames and then decodes/displays them on the screen.
I recommend reading http://www.ietf.org/rfc/rfc3984.txt for the standard way to packetize H.264 video.
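To give a feel for the from-scratch part, the fixed 12-byte RTP header (RFC 3550) can be built directly into a Buffer and sent over UDP. This is only a sketch: the payload type, SSRC and destination port/host are placeholder values, and splitting large H.264 NAL units into FU-A fragments per RFC 3984 is a separate step on top of this.

```javascript
// Build a minimal RTP packet (RFC 3550 fixed header + payload) and send it over UDP.
const dgram = require('dgram');
const udp = dgram.createSocket('udp4');

let seq = 0;
function sendRtp(payload, timestamp, { payloadType = 96, ssrc = 0x12345678 } = {}) {
  const header = Buffer.alloc(12);
  header[0] = 0x80;                          // version 2, no padding/extension/CSRC
  header[1] = payloadType & 0x7f;            // marker bit 0, payload type (96 = dynamic)
  header.writeUInt16BE(seq++ & 0xffff, 2);   // sequence number
  header.writeUInt32BE(timestamp >>> 0, 4);  // media timestamp (90 kHz clock for video)
  header.writeUInt32BE(ssrc, 8);             // synchronization source identifier
  udp.send(Buffer.concat([header, payload]), 5004, '127.0.0.1'); // placeholder port/host
}
```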
This is all a very general approach, but it should give you the general idea.
Hopefully this info helps; good luck.
