Determining transfer speed over websocket - javascript

I've done my obligatory Googling and I can't seem to find a suitable answer to this...
Is there a way to determine the upload and download rate/speed over a WebSocket connection using JavaScript? For instance, if I want to pull canvas data from a very large canvas (or use any other large chunk of data) and pass it to the server over a WebSocket connection in a single message, is there a way to determine how long it takes that message to actually send (though not necessarily to have been received by the server)?
Maybe breaking the data into smaller pieces and sending it over a series of messages would also work, but even then I don't know of a way to determine when each message was actually sent. I'm aware that AJAX gives upload and download progress, but I'd prefer to use WebSockets if possible.
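One workable approach, sketched below: WebSocket exposes bufferedAmount, the number of bytes queued by send() but not yet handed off to the network, so polling it until it reaches zero gives a rough "time to send" for a message (sent, not received). The polling interval and callback shape here are my own choices, not part of any standard API.

```javascript
// Estimate upload throughput by watching WebSocket.bufferedAmount.
// bufferedAmount is the number of bytes queued but not yet sent, so
// "queue drained" approximates "message sent" (not "message received").
function sendAndMeasure(socket, data, onDone) {
  const start = performance.now();
  socket.send(data);
  const bytes = typeof data === "string" ? data.length : data.byteLength;

  (function poll() {
    if (socket.bufferedAmount === 0) {
      const seconds = (performance.now() - start) / 1000;
      onDone(bytes / seconds); // rough bytes-per-second estimate
    } else {
      setTimeout(poll, 50); // not drained yet; check again shortly
    }
  })();
}
```

Note that bufferedAmount only reflects the browser's own queue; bytes handed to the OS network stack count as "sent" even if they are still in flight, so treat the result as an estimate.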

Related

is it a good idea to split a buffer before sending through websockets?

I'm developing an online game fully in JavaScript (both server and client). Because people can make custom maps/servers in my game, one option is for the base server script to upload the map file to the client over WebSockets upon connection, but I haven't found anywhere how to limit the WebSocket transfer speed so the server won't lag every time a new person connects and downloads its map.
One way of getting around this that I thought of was splitting the buffer holding the map file (I read it using fs.readFileSync and keep it in a buffer until requested) into many small buffers, then uploading only one of them per second to the client, thus creating a "fake" upload speed limit for the server and, in theory, avoiding lag and/or crashes.
My question is: is that a good idea? Would that work as intended?
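The splitting idea above can be sketched roughly like this, assuming a Node.js server where `socket` is a connected WebSocket client (e.g. from the ws package); chunkBuffer and sendThrottled are hypothetical helper names, and the one-chunk-per-interval timer is the "fake" speed limit described:

```javascript
// Split a Buffer into fixed-size chunks.
function chunkBuffer(buf, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < buf.length; offset += chunkSize) {
    chunks.push(buf.slice(offset, offset + chunkSize));
  }
  return chunks;
}

// Send one chunk per interval, capping the effective upload rate at
// roughly chunkSize / intervalMs bytes per millisecond per client.
function sendThrottled(socket, buf, chunkSize, intervalMs) {
  const chunks = chunkBuffer(buf, chunkSize);
  let i = 0;
  const timer = setInterval(() => {
    if (i >= chunks.length) return clearInterval(timer); // done
    socket.send(chunks[i++]);
  }, intervalMs);
}
```

The client then reassembles the chunks, so you also need some framing (e.g. a final "done" message) to know when the map is complete.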

How Can I Limit The Amount of Data Received In WebRTC Datachannel

I need some protection in a WebRTC app: I need to stop clients from receiving a large data packet, e.g. anything over 2 kB.
I need to cut off, so to speak, if someone sends me data larger than 2 kB, and delete the message. Is there a setting somewhere I can use to limit the data received? Or can I intercept the data while it is being downloaded and then stop the download partway?
I've been searching around but could not find any information on this.
According to the Mozilla Foundation's WebRTC page describing Using WebRTC data channels, there is not.
WebRTC data channels support buffering of outbound data. This is handled automatically. While there's no way to control the size of the buffer, you can learn how much data is currently buffered, and you can choose to be notified by an event when the buffer starts to run low on queued data. This makes it easy to write efficient routines that make sure there's always data ready to send without over-using memory or swamping the channel completely.
However, if your intention is to use the amount of queued data as a trigger, you may be able to use RTCDataChannel.bufferedAmount.
This returns the number of bytes of data currently queued to be sent over the data channel. More details here.
Get that value and build logic around it to limit or stop transfers as per your needs.
Syntax
var amount = aDataChannel.bufferedAmount;
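Note that bufferedAmount only covers outbound data. On the receive side there is no way to stop a message partway; a data channel message is delivered whole or not at all. A minimal guard (my own sketch, not from the quoted docs) is to measure each incoming message after delivery and close the channel when one exceeds the limit; `channel` here is a hypothetical connected RTCDataChannel:

```javascript
const MAX_BYTES = 2 * 1024; // the 2 kB limit from the question

// Size of an incoming message: string, ArrayBuffer, or Blob.
function byteLength(data) {
  if (typeof data === "string") return new TextEncoder().encode(data).length;
  return data.byteLength ?? data.size;
}

// Drop oversized messages and close the channel as a penalty.
function guardChannel(channel, onMessage) {
  channel.onmessage = (event) => {
    if (byteLength(event.data) > MAX_BYTES) {
      channel.close(); // message is discarded, sender is cut off
      return;
    }
    onMessage(event.data);
  };
}
```

This doesn't save the bandwidth of the first oversized message, but it does stop a misbehaving peer from sending any more.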

Chunk and pipe big amounts of data for client-side (browser) PDF generation

I'm trying to download HTML/JSON data from a web server (Node.js) and convert it to PDF on the client side. I want to do the processing in the user's browser so my server doesn't get overloaded with PDF conversions.
It wouldn't be a problem if the data weren't so big. A report (the data downloaded from the server) can total 200 to 300 MB, and browsers can't handle that much data in memory. Because of that, I (probably) need to download and save the data in chunks, or pipe it directly to the PDF converter.
But I can't get my head around it. How can I slice & store/pipe the downloaded data? I've been looking around and found several libraries, but I still didn't get how to make them work together. Any thoughts?
I do not think it's a good idea to have app consumers generating 800MB pdf files on their computers.
I would avoid JSON in the case of large records. If there is over 25 MB of actual records data, I would send that data in binary/compressed form.
As for viewing all this data, I don't even think PDF is the way to go. I would create a special offline viewer for the end consumer. Perhaps a custom browser plugin or extension so that they don't have to throw 800MB in memory when they're viewing a report.
Another consideration might be to use Google Drive, Rackspace OpenCloud, AWS, or something of that nature. The reason is that if something goes wrong on the consumer's end halfway through the transfer, your server will have to start all over too. If you put the file in the cloud behind a CDN, they can download it however many times they need, from a server that's close to them. Your server should also be able to send it to the cloud much faster than to the client, so its resources are tied up for less time.
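If the data does stay in the browser, one way to avoid holding the whole report in memory is to read the fetch response body as a stream and hand each chunk to the converter as it arrives. This is a sketch; the URL and the handleChunk callback are hypothetical:

```javascript
// Read a large response incrementally instead of buffering it all.
async function streamReport(url, handleChunk) {
  const response = await fetch(url);
  const reader = response.body.getReader();
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.length; // value is a Uint8Array chunk
    handleChunk(value);    // pipe this into the PDF converter
  }
  return total; // bytes processed
}
```

Whether this helps in practice depends on the PDF library accepting incremental input; if it needs the whole document at once, chunked download alone won't reduce peak memory.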

Is there any way to show transfer progress of base64 image string data sent from client to server?

My app involves heavy client-side image manipulation. Since the files are modified so frequently during a user's session, we do not physically upload them to the server until the user is finished and chooses to save the image(s).
The images are read from the client using the html5 file API and stored in memory as base64 strings where the user proceeds to perform his/her manipulations quickly and efficiently.
When the user chooses to save we transfer the string or strings to the server via a standard ajax POST and then build the image into a physical file server-side.
What I would like to do is provide a progress bar for this final upload stage.
Now, I am well aware that the html5 spec includes support for file upload progress reporting. However we are not using the standard upload methods, instead, as mentioned above the images are sent as strings.
So my question really boils down to the following:
A) Is there any way to actually measure the bytes sent during a simple POST (PHP server-side)? If so, this would work, as I already have the image size in bytes from the HTML5 FileReader API.
B) If not, is there any way I can convert the Base64 data into actual files on the fly and send them via the standard HTML5 upload method, reporting progress as we go?
In either case... If so, how?
Many thanks in advance.
It's easier when you send the Base64 image via Ajax. Ajax works with a callback function, so just use an indicator of some sort to let the user know that the request has been initiated (i.e. the upload has started). On completion the callback function will run, and you can turn off the indicator, whatever it might be.
As far as measuring the actual bytes via Ajax, I'm not sure.
This method
showing progressbar progress with ajax request
suggests estimating it. I would take it a step further and time how long the callback takes over, say, 10 tries (use that average to drive your progress bar, and also use the 90% trick mentioned above).
Or if you want to encode/decode your Base64 text.
How can you encode a string to Base64 in JavaScript?
Depending on the server technology you are using, there are several options.
Use Comet or JavaScript long polling to get the progress of the upload from the server, and update the progress bar on the client side.
If you are using .NET technologies on the backend, explore the SignalR library, which provides all the plumbing for real-time communication between the server and the client.
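Option B from the question is also workable: decode the Base64 string to a Blob so the browser can report real upload progress via xhr.upload.onprogress. A sketch, with a hypothetical /upload endpoint:

```javascript
// Decode a Base64 string into a Blob of raw bytes.
function base64ToBlob(base64, mimeType) {
  const binary = atob(base64); // Base64 -> byte string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return new Blob([bytes], { type: mimeType });
}

// Upload the Blob with real progress events (hypothetical endpoint).
function uploadWithProgress(blob, onPercent) {
  const xhr = new XMLHttpRequest();
  xhr.upload.onprogress = (e) => {
    if (e.lengthComputable) onPercent(Math.round((e.loaded / e.total) * 100));
  };
  const form = new FormData();
  form.append("image", blob, "image.png");
  xhr.open("POST", "/upload");
  xhr.send(form);
}
```

Because the payload is now a real file part rather than a Base64 string field, the server side receives an ordinary multipart upload, and the progress events reflect actual bytes on the wire.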

Multiple websocket connections

Are there any advantages to having two distinct WebSocket connections to the same server from the same client? To me this seems like a bad design choice, but is there any reason why/where it would work out better?
There are several reasons why you might want to do that but they probably aren't too common (at least not yet):
You have both encrypted and unencrypted data that you are sending/receiving (e.g. some of the data is bulky but not sensitive).
You have both streaming data and latency sensitive data: imagine an interactive game that occasionally has streamed video inside the game. You don't want large media streams to delay receipt of latency sensitive normal game messages.
You have both textual (e.g. JSON control messages) and binary data (typed arrays or blobs) and don't want to bother with adding your own protocol layer to distinguish since WebSockets already does this for you.
You have multiple WebSocket sub-protocols (the optional setting after the URI) that you support and the page wants to access more than one (each WebSocket connection is limited to a single sub-protocol).
You have several different WebSocket services sitting behind the same web server and port. The way the client chooses per connection might depend on URI path, URI scheme (ws or wss), sub-protocol, or perhaps even the first message from client to server.
I'm sure there are other reasons but that's all I can think of off the top of my head.
I found that it can make client logic much simpler when you are only subscribing to updates of certain objects being managed by the server. Instead of devising a custom subscription protocol for a single channel, you can just open a socket for each element.
Let's say you obtained a collection of elements via a REST API at
http://myserver/api/some-elements
You could subscribe to updates of a single element using a socket url like this:
ws://myserver/api/some-elements/42/updates
Of course, one can argue that this doesn't scale for complex pages. However, for small and simple applications it might make your life a lot easier.
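The per-element subscription above can be sketched as a small helper; the URL pattern follows the example, and the server side is assumed to exist:

```javascript
// Build the updates URL for one element, following the example pattern.
function updatesUrl(id) {
  return `ws://myserver/api/some-elements/${id}/updates`;
}

// Open one socket per element; the caller keeps the socket and can
// close() it when the element leaves the page.
function subscribeToElement(id, onUpdate) {
  const socket = new WebSocket(updatesUrl(id));
  socket.onmessage = (event) => onUpdate(JSON.parse(event.data));
  return socket;
}
```

Each element's updates arrive on their own channel, so the client never has to demultiplex a shared subscription protocol.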
