I am designing an architecture for a web application using Node.js, and we need to be able to send medium-sized files to the client from a gallery. As a user browses the gallery, they will be sent these binary files as fast as possible (one per gallery item). The files could go up to 6 MB, but probably average around 2 MB.
My client is insisting that we should use websockets for data transfer instead of XHR. Just to be clear, we don't need bi-directional communication.
I lack the experience in this domain and need help in my reasoning.
My points so far are the following:
Using WebSockets breaks any client-side caching that would be provided by HTTP. Users would be forced to re-download content if they visited the same item in the gallery twice.
WebSocket messages cannot be handled by/routed to proxy caches. They must always be handled by an explicit server.
CDNs are built to provide extensive web caching, intercepting HTTP requests. WebSockets would limit us from leveraging CDNs.
I guess that Node.js would be able to respond faster to hundreds/thousands of XHR requests than to the same number of concurrent WebSocket connections.
Are there any technical arguments for/against using WebSockets for pure data transfer over standard HTTP requests? Can anyone nullify/clarify my points and maybe provide links to help with my research?
I found this link very helpful: https://www.mnot.net/cache_docs/#PROXY
Off the top of my head, I can see the following technical arguments for XHR besides that it uses HTTP and is therefore better at caching (which is essential for speed):
HTTP is the dedicated protocol for file downloads. It's natively supported by browsers (via the XHR interface), therefore better optimised and easier for the developer to use
HTTP already features a lot of the things you'd otherwise have to hand-craft on top of WebSockets, like file path requests, auth, sessions, caching… all of it on both the client and server side.
XHR has better support even in older browsers
some firewalls only allow HTTP(S) connections
There do not seem to be any technical reasons to prefer WebSockets; the only thing that might affect your choice is "the client is king". You might be able to convince him, though, by telling him how much he'd have to pay you to reimplement the HTTP features on top of a WebSocket connection. It ain't cheap, especially as your application gets more complex.
Btw, I wouldn't support your last point. Node should be able to handle just as many WebSocket connections as HTTP connections; properly optimised, all things are even. However, if your server architecture is not based solely on Node, there is a multitude of plain file-serving applications that are probably faster than Node (not even counting the HTTP caching layer).
Related
Is it possible to implement a WebService over a WebRTC DataChannel?
The idea is:
The client makes one https request to the server for signaling and session establishment
The client and the server start to communicate via a WebRTC DataChannel bidirectionally
Benefits?:
Performance ?
Requests go over one connection, and the standard allows for multiple DataChannels over the same connection (ports)
Flexible networking topologies
UDP
End to end encryption
The server can send events over the same connection
Load balancing could be implemented client side from a pool of servers, without a load balancer, or with all kinds of other solutions
Currently being debated the addition of DataChannels to Workers/Service Workers/ etc https://github.com/w3c/webrtc-extensions/issues/64
Drawbacks:
Application specific code for implementing request fragmentation and control over buffer limits
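To illustrate how much of that is application code, here is one hypothetical fragmentation scheme (the 5-byte header is invented for this sketch, not part of any standard): each chunk carries a sequence number and a last-chunk flag so the receiver can reassemble even out-of-order delivery.

```javascript
const CHUNK_SIZE = 16 * 1024; // stay below common SCTP message size limits

// Prefix each chunk with a 4-byte sequence number and a 1-byte "last" flag.
function fragment(buf) {
  const chunks = [];
  for (let off = 0, seq = 0; off < buf.length; off += CHUNK_SIZE, seq++) {
    const header = Buffer.alloc(5);
    header.writeUInt32BE(seq, 0);
    header.writeUInt8(off + CHUNK_SIZE >= buf.length ? 1 : 0, 4);
    chunks.push(Buffer.concat([header, buf.subarray(off, off + CHUNK_SIZE)]));
  }
  return chunks;
}

// Reorder by sequence number and strip the headers.
function reassemble(chunks) {
  return Buffer.concat(
    chunks
      .slice()
      .sort((a, b) => a.readUInt32BE(0) - b.readUInt32BE(0))
      .map((c) => c.subarray(5))
  );
}
```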
[EDIT 3] I don't know how big the difference in performance and CPU/memory usage will be compared to an HTTP/2 stream
Ideas:
Clients could act as read replicas of the data for sync, or serve any other application that is suitable for orbit-db (https://github.com/orbitdb/orbit-db) in the public IPFS network. The benefit of using orbit-db is that it only allows the owner to make writes; the server could additionally sign all the data with its key so that clients can verify and trust that it's from the server. That could offload reads from the main server. Just an idea.
[EDIT]
I've found this repo: https://github.com/jsmouret/grpc-over-webrtc
amazing!
[EDIT2]
Changed Orbit-db idea and removed cluster IPFS after investigating a bit
[EDIT3]
After searching for Fetch pros for HTTP/2, I've found Fetch upload streaming with ReadableStreams. I don't know how much of a difference there will be between running gRPC (bidi) over a WebRTC DataChannel and over an HTTP/2 stream.
https://www.chromestatus.com/feature/5274139738767360#:~:text=Fetch%20upload%20streaming%20lets%20web,things%20involved%20with%20network%20requests).
Very cool video explaining the feature: https://www.youtube.com/watch?v=G9PpImUEeUA
Lots of different points here, will try to address them all.
The idea is 100% feasible. Check out Pion WebRTC's data-channels example. All it takes is a single request/response to establish a connection.
Performance
Data channels are a much better fit if you are doing latency sensitive work.
With data channels you can measure backpressure. You can tell how much data has been delivered and how much has been queued. If the queue is getting full, you know you are sending too much data. Other APIs in the browser don't give you this. There are some future APIs (WebTransport) but they aren't available yet.
Data channels allow unordered/unreliable delivery. With TCP, everything you send is delivered, and in order; this is the cause of head-of-line blocking: if you lose a packet, all subsequent packets must be delayed. For example, if you send 0 1 2 3 and packet 1 hasn't arrived yet, 2 and 3 can't be processed. Data channels can be configured to hand you packets as soon as they arrive.
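A sketch of both points, assuming a browser context (`RTCPeerConnection` is a browser API and does not exist in Node; only the small `canSend` helper is runnable anywhere):

```javascript
// Datagram-like channel: no ordering, no retransmissions.
const LOSSY = { ordered: false, maxRetransmits: 0 };

const HIGH_WATER_MARK = 1 << 20; // pause sending above 1 MiB queued

// Backpressure check against the channel's bufferedAmount.
function canSend(bufferedAmount) {
  return bufferedAmount < HIGH_WATER_MARK;
}

// Browser-only usage (commented out so the sketch stays self-contained):
// const pc = new RTCPeerConnection();
// const dc = pc.createDataChannel('gallery', LOSSY);
// dc.bufferedAmountLowThreshold = HIGH_WATER_MARK / 2;
// dc.onbufferedamountlow = () => resumeSending();
// function pump(chunk) { if (canSend(dc.bufferedAmount)) dc.send(chunk); }
```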
I can't give you specific numbers on the CPU/Memory costs of running DTLS+SCTP vs TLS+WebSocket server. It depends on hardware/network you have, what the workload is etc...
Multiplexing
You can serve multiple DataChannel streams over a single WebRTC Connection (PeerConnection). You can also serve multiple PeerConnections over a single port.
Network Transport
WebRTC can be run over UDP or TCP
Load Balancing
This is harder (but not intractable): moving DTLS and SCTP sessions between servers isn't easy with existing libraries. pion/dtls has support for exporting/resuming a session; I don't know about support in other libraries, though.
TLS/Websocket is much easier to load balance.
End to end encryption
WebRTC has mandatory encryption. This is an advantage over HTTP 1.1, which might accidentally fall back to non-TLS if configured incorrectly.
If you want to route a message through the server (and not have the server see it) I don't think what protocol you use matters.
Topologies
WebRTC can be run in many different topologies. You can do P2P or Client/Server, and lots of things in between. Depending on what you are building you could build a hybrid mesh. You could create a graph of connections, and deploy servers as needed. This flexibility lets you do some interesting things.
Hopefully addressed all your points! Happy to discuss further in the comments/will keep editing the question.
I was also wondering about this HTTP-over-WebRTC DataChannel idea a couple of years ago. The problem at hand was how to securely connect from a web app to an IoT device (raspberry pi) that sits behind a firewall.
Since there was no readily available solution, I ended up building a prototype. It did the job and has been in live deployment since 2019.
See this technical blog post that covers the design and implementation in more detail:
https://webrtchacks.com/private-home-surveillance-with-the-webrtc-datachannel/
High level architecture:
Simplified sequence diagram:
Recently began the process of extracting the code into a standalone repo.
https://github.com/ambianic/peerfetch
If your main use case is exchanging small content, you may have a look at CoAP (RFC 7252). A peer may easily implement both roles, client and server, since the request and response messages share the same format.
For some advanced usage of DTLS 1.2, DTLS Connection ID can do some magic for you.
If you're not tied to JavaScript and Java is an option, you may check out the open-source project Eclipse/Californium. It's a CoAP/DTLS implementation which comes with DTLS Connection ID and some prepared advanced examples such as built-in-cid-load-balancer-support or DTLS-graceful-restart.
How does a site programmed using TCP (that is, users on the site are connected to the server and exchanging information via TCP) scale compared to just serving information via AJAX? Say the information exchanged is the same.
Trying to clarify: I'm asking specifically about scale. I've read that keeping thousands of TCP connections open is resource-demanding (which resources?) compared to just serving information statically. I want to know if this is correct.
WebSockets is a technology that allows the server to push notifications to the client. AJAX on the other hand is a pull technology meaning that the client is sending requests to the server.
So for example, if you had an application which needed to receive notifications from the server at regular intervals and update its UI, WebSocket is more adapted and much better. With AJAX you would have to hammer your server with requests at regular intervals to see whether some state changed on the server. With WebSockets, it's the server that notifies the client of events happening on the server, and this all happens over a single connection.
So I guess it would really depend on the type of application you are developing but WebSockets and AJAX are two completely different technologies solving different kind of problems. Which one to choose would depend on your scenario.
WebSockets are not one-for-one with AJAX; they offer substantially different features. WebSockets give you the ability to 'push' data to the client. AJAX works by the client sending a request and receiving a response.
The purpose of WebSockets is to provide a low-latency, bi-directional, full-duplex and long-running connection between a browser and server. WebSockets opens up possibilities with browser applications that were previously unavailable using HTTP or AJAX.
However, there is certainly an overlap in purpose between WebSockets and AJAX. For example, when the browser wants to be notified of server events (i.e. push) either AJAX or WebSockets are both viable options. If your application needs low-latency push events then this would be a factor in favor of WebSockets which would definitely scale better in this scenario. On the other hand, if you need to work with existing frameworks and deployed technologies (OAuth, RESTful API's, proxies, etc.) then AJAX is preferable.
If you don't need the specific benefits that WebSockets provides, then it's probably a better idea to stick with existing techniques like AJAX because this allows you to re-use and integrate with an existing ecosystem of tools, technologies, security mechanisms, knowledge bases that have been developed over the last 7 years.
But overall, for push-heavy scenarios, WebSockets will outperform AJAX by a significant factor.
I don't think there's any difference when it comes to scalability between WebSockets and standard TCP connections. A WebSocket connection is an upgrade of an HTTP connection from a one-way request/response pipe into a full-duplex one. The physical resources are exactly the same.
The main advantage of WebSockets is that they run over port 80 (or 443 with TLS), so they avoid most firewall problems, but you have to connect first over standard HTTP.
Here's a good page that clearly shows the benefits of the WebSocket API compared to Ajax long polling (especially on a large scale): http://www.websocket.org/quantum.html
It basically comes down to the fact that once the initial HTTP handshake is complete, data can go back and forth much more quickly because the per-message header overhead is greatly reduced, and either side can send at any time (this is what most people mean by bidirectional communication).
As a side note, if you only need to push data from the server on a regular basis and don't need to make many client-initiated requests, then HTML5 server-sent events with occasional Ajax requests from the client might be just what you need, and much easier to implement than the WebSocket API.
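The SSE wire format is simple enough to frame by hand, which is part of why it's easier than WebSockets; a minimal framing helper (the function name is my own) could look like this:

```javascript
// Serialise one server-sent event: optional "event:" line, then one
// "data:" line per line of payload, terminated by a blank line.
function sseMessage(data, event) {
  let out = '';
  if (event) out += `event: ${event}\n`;
  for (const line of String(data).split('\n')) out += `data: ${line}\n`;
  return out + '\n';
}
```

The server writes these strings to a long-lived response with `Content-Type: text/event-stream`; the browser consumes them with `EventSource`, which even reconnects automatically if the connection drops.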
In the web world a web browser makes a new request for every static file it has to retrieve, so; a stylesheet, javascript file, inline image - all initiate a new server request. Whilst my knowledge of the web is pretty good, the underlying technologies like websockets are somewhat new to me in how they work and what they are capable of.
My question is rather theoretical, but I am wondering if it's possible now or would ever be possible to serve static files via a websocket? Considering websockets are a persistent connection from the client (web browser) to the server, it makes sense that websockets could be used for serving some if not all static content as it would just be one connection as opposed to many.
To clarify a little bit.
I realise my wording about connections was incorrect, as pointed out by Greg below. But from what I understand, the reason CDNs were created and are still used today is to address the issue of browsers and/or servers having a hard limit on the number of concurrent downloads; once you hit that limit your requests are queued, adding to the page load time. I am aware they were also created to provide cookie-less requests as well. So really my question should be: "Can WebSockets be used in place of a CDN?"
BrowserScope has some useful metrics, it appears as though the request limit is about 6 per hostname for most modern browsers and even IE8. But as I said sometimes people have more than 6 resources, does this mean they're being queued and slowing the page load time where websockets could potentially reduce this to one?
It's definitely possible but there are a few reasons why you probably don't want to use this for static resources:
You need at least one resource that is delivered over the standard HTTP mechanism, which means you need something capable of serving static resources anyway. Generally you want to keep JavaScript separate from your HTML, which would mean another static load. Or you can be messy and embed the WebSocket code in the main page, but you still aren't really any better off.
You can't open WebSocket connections until a script on the page starts running. Establishing the WebSocket connection adds some initial latency.
Most browsers will load non-conflicting static resources in parallel (some older browsers have a severe limit on the number of parallel connections, but they still have some parallelization). You could open multiple WebSocket connections for different static resources, but doing this reliably and efficiently is going to take a lot of effort. Browsers have already solved most of these issues for static resources.
Each WebSocket connection is a guaranteed order message based transport. Combined with the serialized nature of Javascript execution this effectively means you get to process one WebSocket message at a time. You could use Web Workers to be able to process more than one WebSocket connection in parallel, but the main render script will still be serialized across those connections. You could certainly make this efficient, but once again, this isn't a trivial problem and browsers have already solved a lot of these static resource loading problems.
Many web servers support gzipping resources before delivering them. WebSocket does not yet have compression support (it's being discussed as an extension in the working group), so if you want to compress your resources over WebSocket you'll have to do it in JavaScript, which adds more latency.
If you have parts of your page that are dynamically updated using static resources (e.g. loading new images into an HTML5 canvas game), then WebSockets may be your best option, because an already established WebSocket connection has lower latency and overhead for getting pushed updates from the server than having them delivered over HTTP. But I wouldn't recommend using WebSockets for the initial static resources when your page first loads.
This answer does not really address your web sockets question, but it may make it obsolete:
The next-generation technology that was supposed to solve the problem of transferring several assets over a single connection is SPDY, which was a candidate for HTTP 2.0. It has working implementations in Chrome and Firefox and already some experimental server-side support from the likes of Google and Twitter.
Edit: SPDY protocol is now deprecated. You can however look into it for research purposes.
I am looking at facebook news feed/ticker right now and I am wondering what technology/architecture it uses to pull in data asynchronously when any of my connections make an update. One possibility that I can think of is a javascript setInterval on a function that aggressively polls the server for new data.
I wonder how efficient that is.
Another possible technology that I can think of is something like Comet/NodeJS architecture that pings the client when there is an update on the server. I am not too familiar with this technology.
If I wanted to create something similar to this. What should I be looking into? Is the first approach the preferred way to do this? What technologies are available out there that will allow me to do this?
There are several technologies to achieve this:
polling: the app makes a request every x milliseconds to check for updates
long polling: the app makes a request to the server, but the server only responds when it has new data available (usually if no new data is available in X seconds, an empty response is sent or the connection is killed)
forever frame: a hidden iframe is opened in the page and the request is made for a doc that relies on HTTP 1.1 chunked encoding
XHR streaming: allows successive messages to be sent from the server without requiring a new HTTP request after each response
WebSockets: this is the best option, it keeps the connection alive at all time
Flash WebSockets: if WS are not natively supported by the browser, then you can include a Flash script to enhance that functionality
Usually people use Flash WebSockets or long-polling when WebSockets (the most efficient transport) is not available in the browser.
A perfect example on how to combine many transport techniques and abstract them away is Socket.IO.
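As a feel for the long-polling transport mentioned above, here is a minimal server-side sketch; `waitForData` stands in for whatever hypothetical mechanism signals that new data has arrived:

```javascript
// Resolve with data as soon as it arrives, or with null after timeoutMs
// so the client can issue the next poll (the "empty response" case).
function longPoll(waitForData, timeoutMs) {
  return Promise.race([
    waitForData(),
    new Promise((resolve) => setTimeout(() => resolve(null), timeoutMs)),
  ]);
}
```

Socket.IO hides exactly this kind of fallback behind one API and upgrades to a real WebSocket when the browser supports it.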
Additional resources:
http://en.wikipedia.org/wiki/Push_technology
http://en.wikipedia.org/wiki/Comet_(programming)
http://www.leggetter.co.uk/2011/08/25/what-came-before-websockets.html
Server polling with JavaScript
Is there a difference between long-polling and using Comet
http://techoctave.com/c7/posts/60-simple-long-polling-example-with-javascript-and-jquery
Video discussing different techniques: http://vimeo.com/27771528
The book Even Faster Websites has a full chapter (ch. 8) dedicated to 'Scaling with Comet'.
I could be wrong, but I think that Facebook relies on a "long polling" technique that keeps an http connection open to a server for a fixed amount of time. The data sent from the server triggers an event client side that is acted upon at that time. I would imagine that they use this technique to support the older browsers that do not have websocket support built in.
I, personally, have been working on an application with similar requirements and have opted to use a combination of node.js and socket.io. The socket.io module uses a variety of polling solutions and automatically chooses the best one based on what is available on the client.
Maybe you may have a look to Goliath (non-blocking IO server written in Ruby) : http://postrank-labs.github.com/goliath/
I have this code working in C#:
var request = (HttpWebRequest)WebRequest.Create("https://x.com/service");
request.Method = "GET";
// Add X509 certificate (the password is supplied to the certificate constructor)
var bytes = Convert.FromBase64String(certBase64);
var certificate = new X509Certificate2(bytes, password);
request.ClientCertificates.Add(certificate);
Is there any way to reproduce this request in Javascript? Third-party libraries would be fine for my purposes.
In browser-JavaScript there is no hope of doing anything like that.
The XMLHttpRequest interface gives you limited capabilities to customise the connection. You don't get any opportunity to influence SSL negotiation and you don't get the ability to make a request to a different domain than the one you're running on (for very good security reasons).
You could get around that and use a low-level socket, with complex libraries to implement HTTPS over the top of it, except that JS doesn't give you any access to low-level sockets either. Browser scripts just aren't expected or trusted to do that kind of thing: again, there are some serious security issues to worry about if any web page can make you send random connections to other servers (including ones on your private local network).
HTML5 gives you WebSocket, which can be used for low-level, low-latency connections, but it is deliberately incompatible with other services, to stop you attacking them. In general, anything you want a browser to talk to, whether that's via XMLHttpRequest, WebSocket or Flash Socket, will have to be deliberately set up to listen for browsers.
You may check out the opensource Forge project. It implements SSL/TLS in JavaScript, including the ability to specify client-side certificates.
http://github.com/digitalbazaar/forge/blob/master/README
This project makes use of the raw Socket interface made available via Flash, and implements the actual TLS layer in JavaScript. However, because Flash is used, the server you are contacting will need to provide a cross domain policy file. Included in the Forge project is an Apache module that server administrators can install to provide this policy file easily.
If you're looking for a solution that doesn't involve a server you have administrative privileges on, then it does look like you're out of luck. As the other poster, bobince, said, all of the new approaches to communicating over raw sockets (or similar) via the web browser require web servers to deliberately "opt in".