To facilitate push notifications, is it absolutely necessary for the browser's maker to host servers?
Each time I launch Firefox, prior to visiting any websites, Firefox makes a connection to an Amazon AWS server and maintains it 24/7 (as long as the web browser is open).
Anyone who knows how to monitor their connections can verify this. As soon as you launch Firefox (prior to going to any websites), it will make and sustain a connection to an IP address like this one:
As long as Firefox remains open, you'll sustain a connection to an Amazon AWS server like the one above.
Is this really just the nature of push notifications? Is it a technology that must maintain a 24/7 connection to your computer even if you haven't navigated to a single website?
So by design, does the spec prescribe that a browser maker must provide a 24/7 service to facilitate this technology?
If this is the only way to facilitate this technology, then I'd be interested in running my own service for it. I don't feel comfortable with my web browser maintaining a 24/7 connection to a server that's also controlled by the maker of the web browser.
While the intention behind this is probably benign, I consider it a slippery slope. A web browser that maintains a 24/7 connection to its creators will, I feel, evolve over the long run to communicate much more than just push notifications.
If you can put my concerns to rest, please do.
My ultimate question is this:
If what I've described above is simply the nature of the spec (as written), then is it possible to host your own push notification service, so that your web browser uses your own service instead of some third party's (like the maker of your web browser)?
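For what it's worth, this does appear possible. The Web Push protocol (RFC 8030) doesn't require that the push service be run by the browser maker, and Mozilla's server implementation (autopush, github.com/mozilla-services/autopush) is open source. Firefox exposes the service endpoint as a preference, so a self-hosted setup is roughly this sketch (push.example.com is a placeholder for your own autopush deployment):

```
// In Firefox's about:config (push.example.com is a placeholder;
// the default value is wss://push.services.mozilla.com/):
dom.push.serverURL = wss://push.example.com/
```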
There are millions of tweets and millions of active users on Twitter. When a tweet gets a like or a retweet, how do they send live updates (over WebSockets) for every tweet to their clients?
I don't think they would send live updates (over WebSockets) for each tweet to every active user; that would result in (number of active tweets) × (number of active users) = millions × millions > 10^12 live updates per minute, and each user would get millions of updates (for all the tweets) every minute.
I think the live updates for a particular tweet are only received by the users who are watching that particular tweet. If this assumption is correct, then please tell me: how do they filter the clients who are watching a particular tweet and send live updates of that tweet only to those filtered clients?
I was just watching a tweet on Twitter, and I was surprised to see live updates to its likes and retweets. I haven't seen any other social media (like Instagram) give live updates for every single post. I want to implement this method on my social media website. What I've concluded might or might not be correct, but I would ask you to explain how Twitter sends live updates for every single tweet only to the particular users who are watching it.
To be clear, ONE device has ONE socket connection to Twitter's cloud.
That ONE socket connection receives ALL information from Twitter's cloud:
- new tweets
- new likes
- new retweets
- everything else
All information comes over the ONE socket.
The cloud "figures out" what to send to whom.
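To make that concrete, here is a minimal sketch of the fan-out bookkeeping in Node.js with the `ws` package (the names and message format are illustrative, not Twitter's actual design):

```javascript
// Minimal fan-out sketch (Node.js + the `ws` package; names and message
// format are illustrative). Clients subscribe to the tweets they are
// viewing; an update for a tweet goes only to the sockets watching it.
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
const watchers = new Map(); // tweetId -> Set of sockets watching that tweet

wss.on('connection', (socket) => {
  const subscriptions = new Set();

  socket.on('message', (raw) => {
    const msg = JSON.parse(raw); // e.g. { type: 'watch', tweetId: '123' }
    if (msg.type === 'watch') {
      if (!watchers.has(msg.tweetId)) watchers.set(msg.tweetId, new Set());
      watchers.get(msg.tweetId).add(socket);
      subscriptions.add(msg.tweetId);
    }
  });

  socket.on('close', () => {
    for (const id of subscriptions) watchers.get(id)?.delete(socket);
  });
});

// Called by the rest of the backend whenever a tweet gets a like/retweet.
function publish(tweetId, update) {
  for (const socket of watchers.get(tweetId) ?? []) {
    socket.send(JSON.stringify({ tweetId, ...update }));
  }
}
```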
Is this what you were asking? Hope it clears it up.
The amazing thing is that Twitter's cloud can connect to perhaps 100 million devices at the same time. (This is a major engineering achievement which requires an incredible amount of hardware, money and engineers.)
BTW, if you're trying to implement something like this for an experiment or a client: these days it is inconceivable that you'd try to write the server side for this from scratch. Services exist which do exactly this - for example pusher.com, pubnub.com and so on.
(Indeed, these realtime infrastructure services are the basic technology of our era - everything runs on them.)
Here's a glance at the mind-boggling effort involved in Twitter's cloud: https://blog.twitter.com/engineering/en_us/topics/infrastructure/2017/the-infrastructure-behind-twitter-scale.html
Realtime communication, or what you refer to as 'live updates', all comes down to various low-level networking protocols. Here's a bit of background on the protocols in general, just so you know what you are working with:
A regular REST API uses HTTP as the underlying communication protocol, which follows the request-response paradigm: the client requests some data or resource from a server, and the server responds to that client. This is what you usually see on a regular website that isn't really live but shows or does something following a button click or a similar trigger from the user.
However, HTTP is a stateless protocol, so every request-response cycle ends up repeating the header and metadata information. This incurs additional latency when request-response cycles are frequently repeated.
With WebSockets, although the communication still starts off as an HTTP handshake, the connection is then upgraded to the WebSocket protocol (provided both the server and the client are compliant with it, as not all entities support WebSockets).
With WebSockets, it is possible to establish a full-duplex, persistent connection between the client and the server. Unlike a request-response exchange, the connection stays open for as long as the application is running (i.e. it's persistent), and since it is full-duplex, two-way simultaneous communication is possible. The server is now capable of initiating communication and 'pushing' data to the client when new data (that the client is interested in) becomes available.
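For instance, a bare-bones browser-side client looks like this (the wss://example.com/feed URL and the message shape are placeholders):

```javascript
// Browser-side sketch: one persistent, full-duplex connection.
const ws = new WebSocket('wss://example.com/feed'); // placeholder URL

ws.addEventListener('open', () => {
  // The client can send at any time...
  ws.send(JSON.stringify({ type: 'watch', tweetId: '123' }));
});

// ...and the server can push at any time, with no request in between.
ws.addEventListener('message', (event) => {
  console.log('server pushed:', event.data);
});
```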
The WebSocket protocol is stateful and allows you to implement the Publish-Subscribe (or Pub/Sub) messaging pattern, which is the primary concept used in real-time technologies: you get new updates in the form of server pushes without the client having to request them (refresh the page) repeatedly. Examples of such applications other than Twitter are Uber-style vehicle location tracking, push notifications, stock market prices updating in real time, chat, multiplayer games, live online collaboration tools, etc.
You can check out a deep-dive article on WebSockets which explains the history of this protocol, how it came into being, what it's used for, and how you can implement it yourself.
Another interesting one is SSE, or Server-Sent Events, which is a subscribe-only version of WebSockets restricted to the web platform. You can use SSE to receive real-time push updates from the server, but it is unidirectional: you can only receive updates via SSE, not publish anything. Here's a video where I explain this in much more detail: https://www.youtube.com/watch?v=Z4ni7GsiIbs
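A minimal SSE consumer, for comparison (the /updates endpoint is a placeholder):

```javascript
// SSE sketch: the server streams events over one long-lived HTTP response;
// the client can only listen, not send.
const source = new EventSource('/updates'); // placeholder endpoint

source.onmessage = (event) => {
  console.log('update:', event.data);
};
// Note: EventSource reconnects automatically if the connection drops.
```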
You can implement these various protocols from scratch as required, or use a distributed messaging service like Ably, which not only provides the messaging infrastructure for these protocols but also offers add-ons such as scalability, reliability, message ordering, and protocol interoperability out of the box, which is essential for a production-level app.
Full disclosure: I'm a Dev Advocate for Ably, but I hope the info in my answer is useful to you nevertheless.
WebRTC signalling is driving me crazy. My use-case is quite simple: a bidirectional audio intercom between a kiosk and a control room webapp. Both computers are on the same network. Neither has internet access; all machines have known static IPs.
Everything I read wants me to use STUN/TURN/ICE servers. The acronyms alone are endless, contributing to my migraine. If this were a standard application, I'd just open a port, tell the other client about it (I can do this via the webapp if I need to) and have the other side connect.
Can I do this with WebRTC? Without running a dozen signalling servers?
For the sake of examples, how would you connect a browser running on 192.168.0.101 to one running on 192.168.0.102?
STUN/TURN is different from signaling.
STUN/TURN in WebRTC is used to gather ICE candidates. Signaling is used to transmit the session description (offer and answer) between the two PCs.
You can use a free STUN server (like stun.l.google.com or stun.services.mozilla.org). There are also free TURN servers, but not many (they are resource-expensive). One is numb.viagenie.ca.
Now, there's no standard signaling server, because signaling is custom and can be done in many ways. Here's an article that I wrote. I ended up using Stomp on the client side and Spring on the server side.
I guess you can tamper with the SDP and inject the ICE candidates statically, but you'll still need to exchange the SDP (which is dynamically generated each session) between these two PCs somehow. Even so, taking into account that the configuration will not change, I guess you could exchange it once (through the means of copy-paste :) ), store it somewhere, and use it every time.
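A sketch of that copy-paste approach in browser JavaScript (run from a devtools console or a module so the top-level await works; LAN-only, so no ICE servers are configured):

```javascript
// Offering side; the answering side mirrors this with createAnswer().
const pc = new RTCPeerConnection({ iceServers: [] }); // LAN-only: no STUN/TURN

pc.addTransceiver('audio'); // the intercom case: negotiate an audio track

pc.onicegatheringstatechange = () => {
  if (pc.iceGatheringState === 'complete') {
    // The SDP now embeds the host candidates; copy this JSON to the other
    // machine by whatever means you like (email, USB stick, the webapp...).
    console.log(JSON.stringify(pc.localDescription));
  }
};

await pc.setLocalDescription(await pc.createOffer());

// On the other machine: pc.setRemoteDescription(pastedOffer), then
// createAnswer()/setLocalDescription(), and paste the answer back here.
```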
If your end-points have static IPs then you can ignore STUN, TURN and ICE, which are just power-tools to drill holes in firewalls. Most people aren't that lucky.
Due to how WebRTC is structured, end-points do need a way to exchange call setup information (SDP), like media ports and key information, ahead of time. How you get that information from A to B and back to A is entirely up to you ("signaling server" is just a fancy word for this), but most people use something like a web socket server, the tic-tac-toe of client-initiated communication.
I think the simplest way to make this work on a private network without an internet connection is to install a basic web socket server on one of the machines.
As an example I recommend the very simple https://github.com/emannion/webrtc-web-socket which worked on my private network without an internet connection.
Follow the instructions to install the web socket server on e.g. 192.168.0.101, then have both end-points connect to 192.168.0.101:1337 with Chrome or Firefox. Share the camera on both ends in the basic demo web UI, hit Connect, and you should be good to go.
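If you'd rather roll a minimal relay yourself instead of using that repo, here is a sketch with Node.js and the `ws` package (the port and behavior are illustrative):

```javascript
// Tiny signaling relay: forwards every message (SDP offers/answers, ICE
// candidates) to every other connected client. Enough for two LAN peers.
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 1337 }); // port is illustrative

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```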
If you need to do this entirely without any server, then this answer to a related question at least highlights the information you'd need to send across (in a cut'n'paste demo).
The question is fairly self-explanatory. I want to auto-detect my server software within a local network from a webpage. I'm able to send and receive broadcasts with Node, but for this to work I need to be able to send or receive broadcasts with in-browser JavaScript, and then connect directly to my server.
Does anyone know how to do this? Is there a library for it, or am I out of luck?
I would heartily recommend that you take a look at coreos/etcd, hashicorp/consul or some other service discovery solution which exposes an HTTP interface and JSON data about the location of your services.
Since you cannot access the underlying networking devices from the browser (imagine if I could start probing SO's internal network from my external location), broadcasts are out. Arguably, a service discovery solution takes as much time to set up as it would take you to write a proper Node.js application that discovers resources on your network and exposes them via JSON to your clients. But using a proper service discovery solution means you can take this to any kind of networking configuration your applications may be running in tomorrow, under any kind of circumstances they might find themselves in while running (fiber optic cables got cut between two data centers, something heavy fell and broke a switch, something monopolized all the network bandwidth, the IP address of the service changes intermittently, etc.).
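For instance, Consul exposes its catalog over plain HTTP, so the browser can ask it where a service lives. A sketch (consul.local:8500 and my-server are placeholders, the agent must allow CORS for browser use, and this runs inside an async function):

```javascript
// Service discovery from the browser via Consul's HTTP catalog API.
const res = await fetch('http://consul.local:8500/v1/catalog/service/my-server');
const instances = await res.json(); // one entry per registered instance
const { Address, ServicePort } = instances[0];
console.log(`discovered server at http://${Address}:${ServicePort}`);
```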
I'm playing around trying to find a way to communicate between two browsers on the same network to establish WebRTC without a server roundtrip (no STUN/ICE/TURN). Basically an alternative to the approach found here, where the "handshake" is done via copy/mail/pasting.
After sifting through all the cross-browser-communication examples I could find (like via cookies or WebTCP) plus a bunch of questions on SO (like here), I'm back to wondering a simple thing:
Question:
If Alice and Bob visit the same page foo.html while on the same network and they know each other's internally assigned IP addresses, are there any ways they can communicate purely with what is available in the browser?
This excludes non-standard APIs like Mozilla TCP_Socket_API, but other than that all "tricks" are allowed (img tags, iframes, cookies, etc.).
I'm just curious if I can listen to someone on the same network "broadcasting" something via the browser at all.
Edit:
foo.html will be on a static server: no logic, no ICE, no shortcuts.
Edit:
Still not a solution, but a WebSocket server as a Chrome extension comes closer. Example here: almost pure browser serverless WebRTC
Yes, you can establish a direct connection between two browsers over the local network using WebRTC. It requires the use of ICE, but that does not mean that an outside STUN or TURN server is needed. If the browsers are on the same network, ICE will succeed with only the local candidates of each browser.
STUN/TURN is needed only in order to guarantee that two endpoints can establish a connection even when they are in different networks and behind NATs.
In fact, if you use most of the WebRTC example applications (such as apprtc) with two browsers connected in a local network, ICE is most likely to select and use the pair of local addresses. In this case a channel allocation on a TURN server will be made, but it will not get used.
In your WebRTC application, you can disable the use of STUN/TURN by passing empty iceServers when you create the PeerConnection.
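Something like this sketch:

```javascript
// No STUN/TURN configured: ICE still runs, but gathers only host candidates
// (the browsers' own local addresses), which suffices on a shared network.
const pc = new RTCPeerConnection({ iceServers: [] });
pc.onicecandidate = (e) => {
  if (e.candidate) console.log(e.candidate.candidate); // host candidates only
};
```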
While the MDN documentation lists WebSocketServer as a client API, I don't think this is accurate (maybe they intended to document how to write a server there).
At the moment, I know of no standard way to create a server socket in a web browser. I know a couple of attacks to scan the local network, but most of them rely on an active server outside the network: you connect to a server and get JavaScript back which opens a WebSocket connection. Via that connection, I can take full control of the client and have it open more WebSockets with local IP addresses to scan the internal network.
If internal web sites don't implement CORS correctly (see here), I can access all internal web sites where the user is currently logged in. That is a devious attack vector which allows external attackers to browse internal documents without cracking anything. This page has a demo of the attack.
Even Flash won't let you create a server socket.
If you allow a Java applet and the Java version on the client is very old or the user blindly clicked "OK", then you can create server sockets.
Related:
Socket Server in Javascript (in browsers)?
This can be explained easily: it's not possible. In order for Alice and Bob to communicate at all without a third party, at least one of them needs to be listening for incoming connections, and that is not possible using a standard web browser alone.
You can take a look at this:
https://github.com/jed/browserver-client
I think you can easily create an HTTP server with JavaScript and send messages from one browser to another. With Node.js you can achieve the same.
I have a desktop product that uses an embedded webserver, which will use self-signed certs.
Is there something that I can put in a web page that would detect that they haven't added the root CA to their trusted list, and display a link or DIV or something directing them how to do it?
I'm thinking maybe a DIV that has instructions on installing the CA, and some JavaScript that runs a test (tries to access something without internal warnings??) and hides the DIV if the test succeeds. Or something like that...
Any ideas from the brilliant SO community ? :)
Why do you want to do this? It is a bad idea to train users to indiscriminately install root CA certificates just because a web site tells them to. You are undermining the entire chain of trust. A security conscious user would ignore your advice to install the certificate, and might conclude that you are not taking security seriously since you did not bother to acquire a certificate from an existing CA.
Do you really need HTTPS? If so, you should probably bite the bullet and make a deal with a CA to facilitate providing your customers with proper CA signed server certificates. If the web server is only used for local connections from the desktop app, you should either add the self-signed certificate to the trusted list as part of the installation process, or switch to HTTP instead.
Assuming you know C# and you want to install a PFX file: create an exe that will be run from a URL. Follow this URL
and this
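(Alternatively, if shipping an exe is overkill, the stock Windows certutil tool can import a certificate file into the trusted root store; a sketch, where mycert.cer is a placeholder and the prompt must be elevated:)

```
certutil -addstore -f "Root" mycert.cer
```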
The only idea I have is to use frames and some javascript.
The first frame element will act as a watchdog, waiting X amount of time (via JavaScript setTimeout) before showing your custom SSL failure message to the user, with hyperlinks or instructions to download the self-signed cert.
The second frame element attempts the HTTPS connection and, if successful, resets the watchdog frame so that it never fires. If it fails (assume HTTPS cert validation failed), the watchdog message fires and is presented to the user.
Depending on your browser, you will most likely still see some security warning with this approach, but you would at least be able to push your own content without requiring users to run untrusted code with no proper trust chain (which would be much, much worse from a security POV than accepting the cert validation errors and establishing an untrusted SSL session).
Improvements to the concept may be possible using other testing methods such as XMLHttpRequest et al.
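A rough modern-JavaScript sketch of the same watchdog idea (the /ping endpoint, port, and cert-help element are placeholders):

```javascript
// If the self-signed cert is untrusted, the request fails and we reveal
// the instructions; if it succeeds, the watchdog is cancelled.
const watchdog = setTimeout(showCertInstructions, 5000);

fetch('https://localhost:8443/ping', { mode: 'no-cors' })
  .then(() => clearTimeout(watchdog))   // cert accepted: keep the help hidden
  .catch(() => showCertInstructions()); // cert rejected: show the help

function showCertInstructions() {
  clearTimeout(watchdog);
  document.getElementById('cert-help').style.display = 'block';
}
```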
You should not do this. Root certificates are not something you just install, since adding one could compromise any security given to you by https.
However, if you are making a desktop app, then just listen only on 127.0.0.1. That way the traffic never leaves the user's computer and no attacker can listen in.
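In Node.js terms, just to illustrate the idea (your embedded server will differ):

```javascript
const http = require('http');
// Bound to loopback only: unreachable from other machines, so plain
// HTTP never leaves the box. Port is illustrative.
http.createServer((req, res) => res.end('ok')).listen(8080, '127.0.0.1');
```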
You might try to add some (hidden) Flex element or Java Applet once per user session.
It would just download any HTTPS page from your server and get all the information about the connection:
com.sun.deploy.security.CertificateHostnameVerifier.verify()
or
javax.security.cert.X509Certificate.checkValidity()
I suppose Flex (which is more common for users) should have similar ways of validating the HTTPS certificate from the user's point of view. It should also share the OS's trusted cert store, while Java might have its own.
Since the server is running on the client machine (desktop product), can it not check the supported browsers for installed certs using WinAPI/OS functions? I know Firefox has a cert database in the user's profile directory, and IE probably keeps its information in the registry. It wouldn't be reliable for all browsers, but if the server simply chooses between "Certificate found" and "Please ensure you have installed the cert before continuing", then no harm is done, as the user can choose to continue either way.
You could also simplify matters by providing an embedded browser (i.e. Gecko); this way you only have one browser to deal with, which simplifies a lot of things (including pre-installing the root CA).
To recap: you are setting up webservers on desktop apps; each desktop will have its own webserver, but you want to use SSL to secure the connection to that webserver.
I guess there are several problems here with certificates, one being that the hostname used to access the desktop has to match the certificate. In this case you have little choice but to generate certificates on the client. You'll need to allow the user some way to specify the host name in case the name used by outsiders can't be detected from the host itself.
I'd also suggest allowing for an admin to install a trusted cert, for those who don't want to rely on self-signed certs. This way you can also offload the cost of trusted cert maintenance to the admins who really want it.
Finally, in my experience browsers either allow or refuse the self-signed cert, and there is no way for the server to know whether the cert was denied, temporarily accepted, or permanently accepted. I assume there must be a mechanism somewhere to handle SSL failures, but typical web programming doesn't operate at that layer. In any case, the only thing a webserver can do if SSL fails is fall back to non-SSL, and you've indicated in a comment that you can't have anything non-SSL. I think you should try to have that restriction lifted; a non-SSL start page would be extremely helpful in this situation: it can test the HTTPS connection (using frames, images, JSON, or AJAX), and it can link to documentation about how to set up the certificate, or to an installer for the cert.
If the browser won't connect because of a self-signed cert, and you're not allowed to use plain HTTP at all, by what other means could you communicate with the user? There are no other channels and you can't establish one because you don't have any communication.
You mentioned in a comment writing a win32 app for installing the cert. You could install a cert at the time you install the application itself, but that doesn't help any remote browsers, and a local browser doesn't need SSL to access localhost.
We've been working on an open-source JavaScript project called Forge that's related to this problem. Do you have a website that your users could access? If so, you could provide a secure connection to those desktop apps via your website, using a combination of Flash for cross-domain communication and JavaScript for TLS. It will require you to implement some web services on your website to handle signing the desktop app certificates (or to have your desktop apps upload their self-signed certs so they can be accessed via JavaScript). We describe how it works here:
http://blog.digitalbazaar.com/2010/07/20/javascript-tls-1/
An alternative to setting up a website, though less secure because it allows for a MITM attack, is to host the JavaScript+Flash directly on the desktop app server. You could have your users hit your desktop app over regular HTTP to download the JS, Flash, and SSL cert, and then start using TLS afterwards via the JS. If you're on a localhost connection, the MITM attack might be a little less worrisome, perhaps enough for you to consider this option.
An ActiveX control could do the trick. But I really didn't chime in to help with the solution; more to disagree with the stance that what you are doing is a security risk.
To be clear, you need a secure cipher (hopefully AES and not DES), and you are already in control of your endpoints; you're just not able to completely rule out promiscuous-mode network sniffers that could catch clear-text passwords or other sensitive data.
SSL is a "Secure Sockets Layer" and, by definition, is NOT dependent upon ANY certificates.
However, all effective modern ciphers require it to authenticate the tunnel endpoints, which is not always a necessity for every application. That is a frustration I have dealt with in numerous back-end datacenter automation routines that use web service APIs to manage nodes, where the "users" were actually processes that needed encrypted key exchange prior to a RESTful command negotiation.
In my case, the VLANs were secured via ACLs, so I really "could" send clear-text authentication headers. But just typing that made me throw up in my mouth a little bit.
I'm sure I'll get flamed for typing this, but I'm extremely battle-hardened and would've made the same comments to you in years 10-15 of my IT career. So I empathize with their worries, and very much appreciate if they are passionate enough about security to flame me. They'll figure it out eventually.....
But I do agree that it is a BAD idea to "train" users to install root CAs on their own. On the other hand, if you use a self-signed cert, you have to train them to install that instead. And if a user doesn't know how to determine whether a CA cert is trustworthy, they definitely won't be able to tell a self-signed cert from a CA cert, and thus either process would have the same effect.
If it were me, I would automate the process instead of having it assist the end-users, so that it becomes as hidden from them as possible, just like a proper PKI would do for an enterprise.
Speaking of which, I just thought of a potential solution: use the Microsoft PKI model. With Server 2012 R2, you can deliver trusted keys to endpoints that are not even domain members, using "device control" via "workspaces"; client machines can subscribe to multiple workspaces, so they are not committed solely to yours. Once they subscribe and authenticate, the AD Certificate Services role will push all the root CA certs necessary, as present in Active Directory or a specified LDAP server (in case you are using offline CA servers).
Also, I realize this thread is about 7 years old, but I'm sure it still gets referenced by a good number of people needing similar solutions, and I felt obligated to share a contrasting opinion. (OK Microsoft, where's my kickback for the plug I gave you?)
-cashman