How Can I Limit The Amount of Data Received In WebRTC Datachannel - javascript

I need some protection in a WebRTC app: I need to stop clients from receiving a large data packet, e.g. 2 KB.
I need to cut off, so to speak, if someone sends me data larger than 2 KB, and delete the message. Is there a setting somewhere that lets me limit the data received? Or can I intercept the data while it is being downloaded and stop the download part way?
I've been searching around but could not find any information on this.

As per the Mozilla Foundation's WebRTC documentation, Using WebRTC data channels, there is not:
WebRTC data channels support buffering of outbound data. This is handled automatically. While there's no way to control the size of the buffer, you can learn how much data is currently buffered, and you can choose to be notified by an event when the buffer starts to run low on queued data. This makes it easy to write efficient routines that make sure there's always data ready to send without over-using memory or swamping the channel completely.
However, if your intention is to use the amount of queued data as a trigger, you may be able to use RTCDataChannel.bufferedAmount.
This returns the number of bytes of data currently queued to be sent over the data channel. More details here.
Get that value and build logic to limit or stop the transfer as per your needs.
Syntax
var amount = aDataChannel.bufferedAmount;
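Since bufferedAmount only reflects outbound data, enforcing the 2 KB limit on incoming messages has to happen in the onmessage handler. A minimal sketch (the guardChannel and onAccepted names are illustrative, not part of any API):

```javascript
const MAX_MESSAGE_BYTES = 2048; // 2 KB limit, per the question

// Byte size of an incoming payload (string or ArrayBuffer).
function messageSize(data) {
  if (typeof data === 'string') {
    return new TextEncoder().encode(data).length; // bytes, not characters
  }
  return data.byteLength;
}

// Wrap a data channel so oversized messages are dropped before they
// reach the application handler.
function guardChannel(channel, onAccepted) {
  channel.onmessage = (event) => {
    if (messageSize(event.data) > MAX_MESSAGE_BYTES) {
      console.warn('Dropped message larger than', MAX_MESSAGE_BYTES, 'bytes');
      return;
    }
    onAccepted(event.data);
  };
}
```

Note that the browser still receives the whole message before onmessage fires; there is no API to abort an in-flight data-channel message, so this only prevents oversized data from reaching your application code.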

Related

limit WebRTC bandwidth from the client side (browser)

I know it's possible to limit the upload (sent) bandwidth using "setParameter" on the peer-connection.
I'm looking for a way to limit the download (received) and couldn't find one.
(I don't have control
Am I missing the concept, or is there a way to do it?
Thanks
To "limit bandwidth" of realtime data means sending less of it.
There's no setParameters on an RTCRtpReceiver and no builtin back channel for this. But you can trivially make your own using e.g. createDataChannel("myBackchannel"), provided you have control of both sides. Then have the receiver send parameters back to the sender over it, which then sets them with setParameters.
This can be controlled by inserting a b= line into the SDP. Support for this varies: until recently, Chrome only supported b=AS, which specifies the bandwidth in kilobits per second, while Firefox only supported b=TIAS, which specifies it in bits per second.
Both variants ask a remote peer not to send more than this bandwidth.
One of the WebRTC samples still shows this usage, but you will need to deactivate the newer setParameters-based approach.
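The backchannel idea from the first answer can be sketched roughly as follows. The channel label, the JSON message shape, and the helper names are all assumptions for illustration; only RTCRtpSender.getParameters()/setParameters() and the maxBitrate encoding field are real APIs:

```javascript
// Receiver side: ask the remote peer to cap its outgoing bitrate.
function requestBitrateCap(backchannel, maxBitrateBps) {
  backchannel.send(JSON.stringify({ maxBitrate: maxBitrateBps }));
}

// Sender side: apply the requested cap via RTCRtpSender.setParameters().
async function applyBitrateCap(sender, maxBitrateBps) {
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}]; // some browsers start with an empty list
  }
  params.encodings.forEach((e) => { e.maxBitrate = maxBitrateBps; });
  await sender.setParameters(params);
}

// Sender side: wire the backchannel messages to a sender.
function wireBackchannel(backchannel, sender) {
  backchannel.onmessage = async (event) => {
    const msg = JSON.parse(event.data);
    if (typeof msg.maxBitrate === 'number') {
      await applyBitrateCap(sender, msg.maxBitrate);
    }
  };
}
```

The design point is that the "receive limit" is really a request: the receiver cannot throttle incoming media itself, so it asks the sender to throttle for it.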

What faster alternatives do I have to websockets for a real-time web application?

I'm planning to write a real time co-op multiplayer game. Currently I'm in the research phase. I've already written a turn-based game which used websockets and it was working fine.
I haven't tried writing a real-time game using this technology, however. My question is about websockets. Is there an alternative way to handle communications between (browser) clients? My idea is to have the game state in each client and only send the deltas to the clients, using the server as a mediator/synchronization tool.
My main concern is network speed. I want clients to be able to receive each other's actions as fast as possible so my game can stay real time. I have about 20-30 frames per second with less than a KByte of data per frame (which means a maximum of 20-30 KBytes of data per second per client).
I know that things like "network speed" depend on the connection but I'm interested in the "if all else equals" case.
From a standard browser, a webSocket is going to be your best bet. The only two alternatives are webSocket and Ajax. Both are TCP under the covers, so once the connection is established they offer pretty much the same transport. But a webSocket is a persistent connection, so you save connection overhead every time you want to send something. Plus, the persistent connection of the webSocket allows you to send directly from server to client, which you will want.
In a typical gaming design, the underlying gaming engine needs to adapt to the speed of the transport between the server and any given client. If the client has a slower connection, then you have to drop the number of packets you send to something that can keep up (perhaps fewer frame updates in your case). The connection speed is what it is so you have to make your app deliver the best experience you can at the speed that there is.
Some other things you can do to optimize the use of the transport:
Collect all data you need to send at one time and send it in one larger send operation rather than lots of small sends. In a webSocket, don't send three separate messages each with their own data. Instead, create one message that contains the info from all three messages.
Whenever possible, don't rely on the latency of the connection by sending, waiting for a response, sending again, waiting for response, etc... Instead, try to parallelize operations so you send, send, send and then process responses as they come in.
Tweak settings for outgoing packets from your server so there is no Nagle delay waiting to see if there is other data to go in the same packet. See Nagle's Algorithm. I don't think you have the ability to tweak this from the client in a browser.
Make sure your data is encoded as efficiently as possible for smallest packet size.
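The first point above, batching several small messages into one send, could look like this (a sketch; the envelope format is an assumption):

```javascript
// Instead of three separate socket.send() calls per frame, collect the
// pending updates and send them as a single message.
function batchUpdates(updates) {
  return JSON.stringify({ type: 'batch', updates });
}

// Receiver side: unwrap a batch back into individual updates.
function unbatch(message) {
  const parsed = JSON.parse(message);
  return parsed.type === 'batch' ? parsed.updates : [parsed];
}
```

One larger send amortizes per-message framing overhead and gives the TCP stack a single contiguous write instead of several small ones.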

Determining transfer speed over websocket

I've done my obligatory Googling and I can't seem to find a suitable answer to this...
Is there a way to determine the upload and download rate/speed over a web socket connection using JavaScript? For instance, if I want to pull canvas data from a very large canvas (or use any other large chunk of data) and pass it to the server over a web socket connection in a single message, is there a way to determine how long it takes that message to actually send (though not necessarily to be received by the server)?
Maybe breaking the data into smaller pieces and sending it over a series of messages would work too, but I still don't know of a way to determine when a message was actually sent. I'm aware that AJAX gives upload and download progress, but I'd prefer to use web sockets if possible.
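One approach worth considering (my own suggestion, not from the thread): a browser WebSocket exposes bufferedAmount, the number of bytes queued by send() but not yet transmitted, so polling it after a large send gives a rough local send duration:

```javascript
// Roughly measure how long a large send takes to leave the local buffer.
// Caveat: bufferedAmount reaching 0 means the data was handed off to the
// network stack, not that the server has received it.
function measureSend(socket, payload, pollMs = 50) {
  return new Promise((resolve) => {
    const start = Date.now();
    socket.send(payload);
    const timer = setInterval(() => {
      if (socket.bufferedAmount === 0) {
        clearInterval(timer);
        resolve(Date.now() - start); // elapsed milliseconds
      }
    }, pollMs);
  });
}
```

Dividing the payload size by the measured time gives an approximate upload rate; the polling interval bounds the precision.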

Why make the server push data?

Why make the server push data to get notifications (e.g. using SignalR) when it can be done client side?
Using a JavaScript timing event that checks for recent updates at specified time intervals, a user can get notifications as long as they remain connected to the server.
So my question is: why do more work at the server that the client can already do?
It's not more work to the server, it's less work. Suppose that you have 10000 clients (and the number could easily be in the 100K or even millions for popular web-sites) polling the server every X seconds to find out if there's new data available for them. The server would have to handle 10000 requests every X seconds even if there's no new data to return to the clients. That's huge overhead.
When the server pushes updates to the clients, the server knows when an update is available and it can send it to just the clients this data is relevant to. This reduces the network traffic significantly.
In addition it makes the client code much simpler, but I think the server is the critical concern here.
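The overhead argument above can be made concrete with a little arithmetic (the numbers are illustrative):

```javascript
// Polling: every client asks the server every X seconds, data or not.
function pollingRequestsPerSecond(clients, intervalSeconds) {
  return clients / intervalSeconds;
}

// Push: the server only sends when an update exists, and only to the
// clients the update is relevant to.
function pushMessagesPerSecond(updatesPerSecond, relevantClientsPerUpdate) {
  return updatesPerSecond * relevantClientsPerUpdate;
}

// 10000 clients polling every 5 seconds = 2000 requests/s even when
// there is nothing new; push traffic scales with actual updates instead.
console.log(pollingRequestsPerSecond(10000, 5)); // → 2000
```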
First, if you don't use server push you won't get instant updates; for example, you can't build a chat application. Second, why bother the client with a job it is not designed to do? Third, you will have performance issues on the client because, as @Ash said, the server is a lot more powerful than a client computer.

Chunking WebSocket Transmission

Since I'm using WebSocket connections on a more regular basis, I was interested in how things work under the hood. So I dug into the endless spec documents for a while, but so far I couldn't really find anything about chunking the transmission stream itself.
The WebSocket protocol calls them data frames (which describes the pure data stream, so they're also called non-control frames). As far as I understood the spec, there is no defined max length and no defined MTU (maximum transfer unit) value, which in turn means a single WebSocket data frame may contain, by spec(!), an infinite amount of data (please correct me if I'm wrong here; I'm still a student on this).
After reading that, I instantly set up my little Node WebSocket server. Since I have a strong Ajax history (also on streaming and Comet), my expectations originally were like, "there must be some kind of interactive mode for reading data while it is transferred". But I am wrong there, ain't I?
I started out small, with 4kb of data.
server
testSocket.emit( 'data', new Array( 4096 ).join( 'X' ) );
and like expected this arrives on the client as one data-chunk
client
wsInstance.onmessage = function( data ) {
console.log( data.length ); // 4095
};
so I increased the payload, and I was actually expecting again that, at some point, the client-side onmessage handler would fire repeatedly, effectively chunking the transmission. But to my shock, it never happened (Node server; tested with Firefox, Chrome and Safari on the client side). My biggest payload was 80 MB
testSocket.emit( 'data', new Array( 1024*1024*80 ).join( 'X' ) );
and it still arrived in one big data-chunk on the client. Of course, this takes a while even if you have a pretty good connection. Questions here are
is there any possibility to chunk those streams, similar to the XHR readyState 3 mode?
is there any size limit for a single ws data-frame ?
are websockets not supposed to transfer such large payloads? (which would make me wonder again why there isn't a defined max-size)
I might still be looking at WebSockets from the wrong perspective; probably the need for sending large amounts of data is just not there, and you should chunk/split any data logically yourself before sending?
First, you need to differentiate between the WebSocket protocol and the WebSocket API within browsers.
The WebSocket protocol has a frame-size limit of 2^63 octets, but a WebSocket message can be composed of an unlimited number of frames.
The WebSocket API within browsers does not expose a frame-based or streaming API, but only a message-based API. The payload of an incoming message is always completely buffered up (within the browser's WebSocket implementation) before providing it to JavaScript.
APIs of other WebSocket implementations may provide frame- or streaming-based access to payload transferred via the WebSocket protocol. For example, AutobahnPython does. You can read more in the examples here https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/websocket/streaming.
Disclosure: I am the original author of Autobahn and work for Tavendo.
More considerations:
As long as there is no frame/streaming API in browser JS WebSocket API, you can only receive/send complete WS messages.
A single (plain) WebSocket connection cannot interleave the payload of multiple messages. So, e.g., if you use large messages, those are delivered in order, and you won't be able to send small messages in between while a big message is still in flight.
There is an upcoming WebSocket extension (extensions are a built-in mechanism to extend the protocol): WebSocket multiplexing. This allows multiple (logical) WebSocket connections to share a single underlying TCP connection, which has multiple advantages.
Note also: you can open multiple WS connections (over different underlying TCPs) to a single target server from a single JS / HTML page today.
Note also: you can do "chunking" yourself at the application layer: send your stuff in smaller WS messages and reassemble them yourself.
I agree, in an ideal world, you'd have message/frame/streaming API in browser plus WebSocket multiplexing. That would give all the power and convenience.
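The application-layer chunking mentioned above can be sketched like this (a minimal sketch; the envelope format with id, seq, and last fields is an assumption):

```javascript
// Split a large string payload into numbered chunks small enough to
// interleave with other messages on the same connection.
function chunkMessage(id, payload, chunkSize = 16 * 1024) {
  const chunks = [];
  for (let i = 0; i < payload.length; i += chunkSize) {
    chunks.push(JSON.stringify({
      id,
      seq: chunks.length,
      last: i + chunkSize >= payload.length,
      data: payload.slice(i, i + chunkSize),
    }));
  }
  return chunks;
}

// Receiver side: collect chunks per message id and deliver the full
// payload once the last chunk arrives. WS messages arrive in order, so
// no reordering logic is needed on a single connection.
function makeReassembler(onComplete) {
  const pending = new Map(); // id -> array of data chunks
  return (raw) => {
    const { id, seq, last, data } = JSON.parse(raw);
    if (!pending.has(id)) pending.set(id, []);
    pending.get(id)[seq] = data;
    if (last) {
      onComplete(id, pending.get(id).join(''));
      pending.delete(id);
    }
  };
}
```

Each chunk would be sent as its own WS message, so small urgent messages can slot in between chunks of a large transfer.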
RFC 6455 Section 1.1:
This is what the WebSocket Protocol provides: [...] an alternative to HTTP polling for two-way communication from a web page to a remote server.
As stated, WebSockets are for communications between a web page and a server. Please note the difference between a web page and a web browser. The examples being used are browser games and chat applications, which exchange many small messages.
If you want to send many MBs in one message, I think you're not using WebSockets the way they were intended. If you want to transfer files, do so using a plain old HTTP request, answered with Content-Disposition to let the browser download the file.
So if you explain why you want to send such large amounts of data, perhaps someone can help come up with a more elegant solution than using WebSockets.
Besides, a client or server may refuse too large messages (although it isn't explicitly stated how it'll refuse):
RFC 6455 Section 10.4:
Implementations that have implementation- and/or platform-specific limitations regarding the frame size or total message size after reassembly from multiple frames MUST protect themselves against exceeding those limits. (For example, a malicious endpoint can try to exhaust its peer's memory or mount a denial-of-service attack by sending either a single big frame (e.g., of size 2**60) or by sending a long stream of small frames that are a part of a fragmented message.) Such an implementation SHOULD impose a limit on frame sizes and the total message size after reassembly from multiple frames.