internet connection availability in the browser - javascript

We all know about the online and offline events in the browser, but they don't work very well for this (they actually detect something quite different).
Right now our site implements connectivity detection by spamming our backend server with a request every second. I suggested sending a HEAD request to / of our domain instead: since it is a Single Page Application, that should be quite fast and there would be no need to hammer the backend. But the customer said we should ping the gateway, THE FIRST POINT OF THE ISP.
I am not sure how to implement that in the browser. First of all, I'm not sure the browser can even discover the ISP's first hop, and ping may be disabled there anyway, which is quite common practice.
Could you please suggest anything?

I've just finished investigating my customer's proposal (ping the gateway, the first point of the ISP). The browser doesn't give us that capability. The general idea might be sound, but even if a ping to that IP address succeeds, it doesn't guarantee that the internet is actually working; and in any case the browser can't send ICMP, and ping might be disabled on that address anyway. So I'll go with my first idea: handle it at the nginx level (to avoid burdening the backend) and see how it works.
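For reference, a minimal client-side sketch of the approach I'm going with, assuming nginx answers HEAD requests for / itself so the backend is never hit (the interval and callback are illustrative):

```javascript
// Hedged sketch: periodically issue a HEAD request to our own origin.
// Assumes nginx serves "/" directly, so the backend is never touched.
async function isOnline() {
  try {
    const res = await fetch('/', {
      method: 'HEAD',
      cache: 'no-store',   // bypass the HTTP cache so we really hit the network
    });
    return res.ok;
  } catch (err) {
    // Network-level failure (DNS, connection refused, offline, ...)
    return false;
  }
}

// Combine the (unreliable) online/offline events with an active check.
function watchConnectivity(onChange, intervalMs = 10000) {
  let last = null;
  const check = async () => {
    const online = await isOnline();
    if (online !== last) {
      last = online;
      onChange(online);
    }
  };
  window.addEventListener('online', check);
  window.addEventListener('offline', check);
  setInterval(check, intervalMs);
  check();
}

watchConnectivity((online) => console.log('online?', online));
```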

Related

Bidirectional server/client communication in JavaScript and PHP

I want to create a multiplayer game using JavaScript (no jQuery) and PHP, where most of the mechanics use AJAX calls. However, I need to determine when a user has left the game so I can update the player status on other players' screens (I assume by regular AJAX requests?). Also, once all players have left, the game files (.txt) on the server need to be deleted.
I am using a free web hosting service, which means I can't use WebSockets or cron jobs. I also don't want to use Node.js. Most of what I have read advises regularly timestamping with PHP sessions, and this is fine, but I would like to know how to then check whether the user/game has been inactive for a period of time.
Also, window.onbeforeunload is too unreliable, in case browsers crash, etc.
Don't use the wrong tool for the job.
Any kind of AJAX-based solution you attempt for this purpose is likely to be very inefficient, or unreliable, or probably both. If you have more than a tiny number of concurrent users, the sheer volume of AJAX requests would be likely to overwhelm the server and potentially bust your monthly quota. And as you've discovered, determining when someone has ended their session by closing the browser window is not straightforward or reliable. I would advise against any such architecture.
WebSockets is really the correct solution for real-time or near-real-time updates between client and server (and vice versa). It's also easy for the socket server to know when someone has disconnected (which would occur if they close the window/tab).
So you could either upgrade your hosting so you're able to run a websocket server successfully, or try to integrate a websocket based solution hosted elsewhere, e.g. Azure SignalR or some similar product (I am not making specific recommendations in an answer, as that's regarded as off-topic).
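To make the WebSocket route concrete, here is a minimal browser-side sketch; the wss://example.com/game endpoint and the message shapes are illustrative assumptions, and the disconnect detection itself happens on whatever host runs the socket server:

```javascript
// Hedged sketch: a browser-side WebSocket client for the game.
// The URL and message format are placeholders for illustration only.
const socket = new WebSocket('wss://example.com/game');

socket.addEventListener('open', () => {
  // Identify ourselves once; the server tracks this connection from now on.
  socket.send(JSON.stringify({ type: 'join', player: 'player-123' }));
});

socket.addEventListener('message', (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === 'playerLeft') {
    // The server noticed a closed connection and told everyone else.
    updatePlayerStatus(msg.player, 'offline');
  }
});

socket.addEventListener('close', () => {
  // Our own connection dropped; try to reconnect or show a notice.
  console.warn('Disconnected from game server');
});

function updatePlayerStatus(player, status) {
  console.log(player, 'is now', status);
}
```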
Use an interval function and ping the PHP endpoint every second.
If the pings stop, the user is out.
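A minimal sketch of that heartbeat idea, assuming a hypothetical heartbeat.php that records a last-seen timestamp in the session (the endpoint name and interval are illustrative, and as noted above this approach scales poorly):

```javascript
// Hedged sketch: send a heartbeat so the server can timestamp the session.
// heartbeat.php is a hypothetical endpoint; the server would mark players
// inactive when their last-seen timestamp is older than some threshold.
setInterval(() => {
  fetch('heartbeat.php', { method: 'POST', credentials: 'same-origin' })
    .catch(() => { /* ignore transient failures */ });
}, 1000);
```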

Checking if a security exception has been accepted by the client

We have a few staging environments for internal testing/dev that do not use "real" SSL certs. Honestly I'm a bit fuzzy on the details, but the bottom line is that when accessing a subdomain on those environments, the browser prompts you to add a security exception, along the lines of "You have asked Firefox to connect securely to example.com but we can't confirm that your connection is secure":
Could this be detected e.g. by making a request to the url in question and processing the error code/any other relevant information it may come back with? I could not find any specifications to indicate how this is being handled by the browser.
Edit:
I don't mind the error occurring on the landing page itself; it's pretty clear to the user. However, some requests fail like this in the background (pulling CSS/JS/other static content from different subdomains), and you don't know they do unless you open the Net panel in Firebug, open the request in a new tab, and see the error...
The intention is not to circumvent this but rather to detect the issue and say something like "hey, these requests are failing, you can add security exceptions by going to these urls directly: [bunch of links]"
Checking the validity of the certificate is solely the responsibility of the client. Only it can know that it has to use HTTPS, and that it has to use it against a certificate that's valid for that host.
If the users don't make these checks and therefore put themselves in a position where a MITM attack could take place, you wouldn't necessarily be able to know about it. An active MITM attacker could perform the very checks you use to verify that the users are doing things correctly, while the legitimate users might never know about it. This is quite similar to relying on redirections from http:// to https://: it works as long as there is no active MITM attack downgrading the connection.
(There is an exception to this, to make sure the client has seen the same handshake as you: client certificates. In that case, you would at least know that a client that has authenticated with a cert has seen your server cert and not a MITM cert, because of the signature at the end of the handshake. This is not really what you're looking for, though.)
JavaScript mechanisms generally won't let you check the certificate yourself. That said, XHR requests to untrusted websites (those producing such warnings) will fail one way or another (generally via an exception): this could be a way to detect whether pages other than the landing page are accessible via background requests (although you will certainly run into issues regarding the Same Origin Policy).
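As a hedged sketch of that detection idea (the probe URLs below are placeholders, and cross-origin restrictions may hide the exact cause of a failure):

```javascript
// Hedged sketch: probe static assets on each subdomain and report the ones
// that fail, which on these staging environments is usually the missing
// security exception. The URLs are illustrative only.
const probes = [
  'https://static1.staging.example.com/ping.css',
  'https://static2.staging.example.com/ping.js',
];

async function findBlockedOrigins() {
  const blocked = [];
  for (const url of probes) {
    try {
      await fetch(url, { mode: 'no-cors', cache: 'no-store' });
    } catch (err) {
      // TLS errors surface here as a generic network failure.
      blocked.push(new URL(url).origin);
    }
  }
  return blocked;
}

findBlockedOrigins().then((origins) => {
  if (origins.length) {
    console.warn('Add security exceptions for:', origins.join(', '));
  }
});
```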
Rather than using self-signed certificates for testing/development, you would be in a much better position if you deployed a test Certification Authority (CA). There are a number of tools to help you do this (which one to use would depend on the number of certificates you need). You would then have to import your own CA certificate into these browsers (or other clients), but the overall testing would be more realistic.
No.
That acceptance (or denial) only modifies a behavior in the client's browser (each browser in a different way). It acknowledges nothing to the server, and the page is not yet loaded at that point, so there is no chance to catch that event.

'Web' based push notifications for internal-only application

I'm already tossing around a solution but as I haven't done something like this before I wanted to check what SO thought before implementation.
Basically I need to modify an existing web based application that has approximately 20 users to add push notifications. It is important that the users get the notifications at the same time (PC-A shouldn't get an alert 20 seconds before PC-B). Currently the system works off of AJAX requests, sending to the server every 20 seconds and requesting any updates and completely rebuilding the table of data each time (even if data hasn't changed). This seems really sloppy so there's two methods I've come up with.
Don't break the connection from server-client. This idea I'm tossing around involves keeping the connection between server and client active the entire time. Bandwidth isn't really an issue with any solution as this is in an internal network for only approximately 20 people. With this solution the server could push Javascript to the client whenever there's an update and modify the table of data accordingly. Again, it's very important that every connected PC receives the updates as close to the same time as possible. The main drawback to this is my experience, I've never done it before so I'm not sure how well it'd work or if it's just generally a bad idea.
Continue with the AJAX requests, but only respond at intervals. A second solution I've thought of would be to let the clients make AJAX requests as usual (currently every 20 seconds) but have the server only respond at 30-second intervals (e.g. 2:00:00 and 2:00:30, regardless of how many AJAX requests it receives in that span of time). This would require adjusting the AJAX timeout to prevent the request from timing out, but it sounds okay in theory, at least to me.
This is for an internal network only, so bandwidth isn't the primary concern, more so that the notification is received as close to each other as possible. I'm open to other ideas, those are just the two that I have thought of so far.
Edit
Primarily looking for pros and cons of each approach. DashK has another interesting approach but I'm wondering if anyone has experience with any of these methods and can attest to the strengths and weaknesses of each approach, or possibly another method.
If I understand your needs correctly, I think you should take a look at Comet.
Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it. Comet is an umbrella term, encompassing multiple techniques for achieving this interaction. All these methods rely on features included by default in browsers, such as JavaScript, rather than on non-default plugins.
The Comet approach differs from the original model of the web, in which a browser requests a complete web page at a time.
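A minimal long-polling (Comet-style) sketch on the browser side, assuming a hypothetical /updates endpoint that holds each request open until there is something to send:

```javascript
// Hedged sketch: classic long polling. The /updates endpoint is assumed to
// block server-side until an update is available (or a timeout is reached),
// so every connected client receives it at nearly the same moment.
async function pollForUpdates() {
  while (true) {
    try {
      const res = await fetch('/updates', { cache: 'no-store' });
      if (res.ok) {
        const update = await res.json();
        applyUpdate(update);           // e.g. patch the table of data in place
      }
    } catch (err) {
      // Network hiccup: back off briefly before re-opening the connection.
      await new Promise((resolve) => setTimeout(resolve, 2000));
    }
  }
}

function applyUpdate(update) {
  console.log('update received', update);
}

pollForUpdates();
```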
How about using an XMPP server to solve the problem?
Originally designed to be an Instant Messaging platform, XMPP is a messaging protocol that enables users in the system to exchange messages. (There's more to this - But let's keep it simple.)
Let's simplify the scenario a little bit. Imagine the following:
You're a system admin. When the system has a problem, you need to let all the employees, about 20 of them, know that the system is down.
In the old days, every employee would ask you, "Is the system up?" every hour or so, and you'd respond passively. While this works, you are overloaded - not by fixing the system outage, but by 20 people asking for the system status every hour.
Now, AIM is invented! Since every employee has access to AIM, you thought, "Hey, how about having every single one of them join a 'System Status' chat room, and I'll just send a message to the room when the system is down (or is back)?" By doing so, employees who are interested in knowing the system status will simply join the 'System Status' room, and will be notified of system status updates.
Back to the problem we're trying to solve...
System admin = "System" who wants to notify the web app users.
Employees = Web app users who want to receive notifications.
System Status chat room = still the System Status chat room
When a web app user signs on to your web app, have the page automatically log them onto the XMPP server and join the system status chat room.
When system wants to notify the user, write code to logon to the XMPP server, join the chat room, and broadcast a message to the room.
By using XMPP, you don't have to worry about:
Setting up a "lasting connection" - some open-source XMPP servers, e.g. ejabberd and Openfire, have built-in support for BOSH, XMPP's implementation of the Comet model.
How the message is delivered
You however will need the following:
Find a Javascript library that can help you to logon to an XMPP server. (Just Google. There're a lot.)
Find an XMPP library for the server-side code. (XMPP libraries exist for both Java and C#, but I'm not sure what system you're using behind the scenes.)
Manually provision each user on the XMPP server (Seems like you only have 20 people. That should be easy - However, if the group grows bigger, you may want to perform auto-provisioning - Which is achievable through client-side Javascript XMPP library.)
As far as long-lasting AJAX calls go, this implementation is limited by the at-most-2-connections-to-the-same-domain issue. If you use up one connection for this XMPP call, you only have 1 more connection to perform other AJAX calls in the web app. Depending on how complex your web app is, this may or may not be desirable, since if 2 AJAX calls have already been made, any subsequent AJAX call will have to wait until one of the AJAX pipelines frees up, which may cause "slowness" in your app.
You can fix this by converting all AJAX calls into XMPP messages, and have a bot-like user on the server to listen to those messages, and response to it by, say, sending back HTML snippets/JSON objects with the data. This however might be too much for what you're trying to achieve.
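If it helps, here's a minimal browser-side sketch of the sign-on flow, assuming Strophe.js is loaded, a BOSH endpoint at /http-bind, and a status@conference.example.com room - all of those names are illustrative:

```javascript
// Hedged sketch using Strophe.js (assumed to be available on the page).
// The BOSH URL, JID, password, and room name below are placeholders.
const connection = new Strophe.Connection('/http-bind');

connection.connect('user1@example.com', 'secret', (status) => {
  if (status === Strophe.Status.CONNECTED) {
    // Listen for groupchat messages (the "system status" notifications).
    connection.addHandler((msg) => {
      const body = msg.getElementsByTagName('body')[0];
      if (body) {
        showNotification(Strophe.getText(body));
      }
      return true; // keep the handler installed
    }, null, 'message', 'groupchat');

    // Join the shared status room by sending presence to it.
    connection.send($pres({ to: 'status@conference.example.com/user1' }));
  }
});

function showNotification(text) {
  console.log('system status:', text);
}
```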
Ahh. Hope this makes sense... or not. :p
See http://ajaxpatterns.org/HTTP_Streaming
It allows the server to push data whenever it wants, not just in response to a query.
You could use this technique without making large changes to the current application, and synchronize the output using the server's clock.
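A hedged sketch of reading such a stream in the browser, assuming a hypothetical /stream endpoint that keeps the response open and appends one JSON message per line:

```javascript
// Hedged sketch: classic HTTP streaming via a long-lived XHR. The /stream
// endpoint and the newline-delimited JSON framing are assumptions.
const xhr = new XMLHttpRequest();
let seen = 0; // how much of responseText has already been processed

xhr.open('GET', '/stream');
xhr.onprogress = () => {
  const chunk = xhr.responseText.slice(seen);
  seen = xhr.responseText.length;
  const parts = chunk.split('\n');
  const partial = parts.pop();  // incomplete trailing line, if any
  seen -= partial.length;       // re-read it on the next progress event
  for (const line of parts) {
    if (line.trim()) {
      handleUpdate(JSON.parse(line));
    }
  }
};
xhr.send();

function handleUpdate(update) {
  console.log('pushed from server:', update);
}
```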
In addition to the other two great options above, you could look at Web Workers, if you know your users have a recent Chrome, Safari, FF, or Opera as their browser.
A Worker has the added benefit of not operating in the same thread as the rest of the page, so performance will be better. The downside is that, for security purposes, you can only send string data between the two scripts and the worker does not have window or document context. However, JSON can be represented as a string, so there's really no limit to the data.
Workers can receive data multiple times and asynchronously. You set the onmessage handler to act each time it receives something.
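As a minimal sketch of that pattern (the poll-worker.js file name and the endpoint are illustrative, and the worker still communicates over HTTP since it has no window or document):

```javascript
// main.js - hedged sketch of offloading the polling to a Web Worker.
const worker = new Worker('poll-worker.js'); // hypothetical file name
worker.onmessage = (event) => {
  const update = JSON.parse(event.data); // workers exchange string data
  console.log('update from worker:', update);
};

// poll-worker.js - runs off the main thread; no window or document here.
// setInterval(async () => {
//   const res = await fetch('/updates');           // illustrative endpoint
//   postMessage(JSON.stringify(await res.json())); // send back as a string
// }, 20000);
```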
If you can ask every user to use a specific browser (Latest Safari or Chrome), you can try WebSockets too.

Long held AJAX connections being blocked by Anti-Virus

Ok, this is downright bizarre. I am building a web application that relies on long held HTTP connection using COMET, and using this to stream data from the server to the application.
Now, the problem is that this does not seem to go well with some anti-virus programs. We are now in beta, and some users are facing problems with the application when their anti-virus is enabled. It's not just one specific anti-virus either. I found this workaround for Avast when I looked online: http://avricot.com/blog/index.php?post/2009/05/20/Comet-and-ajax-with-Avast-s-shield-web-:-The-salvation-or-not
However, does anyone here have any suggestions on how to handle this? Should I send any specific header to please these security programs?
This is a tough one. The kind of anti-virus feature that causes this tries to prevent malicious code running in the browser from uploading your personal data to a remote server. To do that, the anti-virus tries to buffer all outgoing traffic before it hits the network, and scan it for pre-defined strings.
This works when the application sends a complete HTTP request on the socket, because the anti-virus sees the end of the HTTP request and knows that it can stop scanning and send the data.
In your case, there's probably just a header without a length field, so until you send enough data to fill the anti-virus's buffer, nothing will be written to the network.
If that's not a good reason to turn that particular feature off, I don't know what is. I ran into this with Avast and McAfee - at this point, the rest of the anti-virus industry is probably doing something similar. Specifically, I ran into it with McAfee's Personal Information Protection feature, which as far as I can tell is simply too buggy to use.
If you can, just keep sending data on the socket, or send the data in HTTP messages that have a length field. I tried reporting this to a couple of anti-virus vendors - one of them fixed it, the other one didn't, to the best of my knowledge.
Of course, this sort of feature is completely useless. All a malicious application would need to do to get around it is to ROT13 the data before sending it.
Try using https instead of http. There are scanners that intercept https, too, but they're less common and the feature defaulted to off last time I checked. It also broke Firefox SSL connectivity when activated, so I think very few people will activate it and the vendor will hopefully kill the feature.
The problem is that some files can't be scanned in order - later parts are required to determine if the earlier parts are malicious.
So scanners have a problem with channels that are streaming data. I doubt your stream of data is able to be recognised as a clean file type, so the scanner is attempting to scan the data as best it can, and I guess holding up your stream in the process.
The only thing I can suggest is to do the data transfer in small transactions, and use the COMET connection for notification only (closing each channel after a single notification).
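A hedged sketch of that suggestion, with the long-held connection carrying only a small notification and the actual payload fetched in an ordinary, length-delimited request (/notify and /data are illustrative endpoints):

```javascript
// Hedged sketch: keep the COMET channel for tiny notifications, and pull the
// real data with a normal request/response that carries a Content-Length,
// which scanners can buffer and release quickly. Endpoints are placeholders.
async function notificationLoop() {
  while (true) {
    try {
      // Server closes this response as soon as one notification is ready.
      const res = await fetch('/notify', { cache: 'no-store' });
      const { id } = await res.json();

      // Ordinary short request for the actual payload.
      const data = await (await fetch(`/data?id=${encodeURIComponent(id)}`)).json();
      render(data);
    } catch (err) {
      await new Promise((r) => setTimeout(r, 2000)); // back off and retry
    }
  }
}

function render(data) {
  console.log('data:', data);
}

notificationLoop();
```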
If you use a non-standard port for your web requests, you may be able to work around this, though there are a number of other issues, namely that many browsers will consider this cross-domain. Not sure if I have a better suggestion to offer here. It really depends on how the AV program intercepts a given port's traffic.
I think you're going to be forced to break the connection and reconnect. What does your code do if the connection goes down in an outage situation? I had a similar problem with a firewall once. The code had to detect the disconnect, then reconnect. I like the answer about breaking up the data transfer.

what comet technique is this demo using?

What they do in this demo is exactly what I want to do.
http://www.lightstreamer.com/demo/RoundTripDemo/
I wonder what Comet technique they are using.
It can't be an iframe, because on Firefox I can open two tabs with the same link, and with an iframe you can't do that. And it can't be long polling with AJAX, because I didn't see it poll anything in Firebug.
Does anyone know the answer? (It would be great to have links to good tutorials that do exactly the same thing with the same technique.)
Whilst digging through the obfuscated scripts is not something I fancy right now, judging by the contents of the page DOM it is posting a <form> inside a hidden <iframe> to send data to the server, and having the server send back <script> tags with code that passes data back to the caller.
This is a rather heavyweight and obtrusive technique. It was the only way of doing in-page server communication in the days before XMLHttpRequest existed; I typically wouldn't use it today.
(I wish WebSocket would hurry up and get implemented, doing away with all the long-polling nastiness.)
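For illustration, a stripped-down sketch of that hidden-iframe ("forever frame") technique on the receiving side; the /push endpoint and the handlePush callback are assumptions, not what Lightstreamer actually uses:

```javascript
// Hedged sketch of the technique described above. The server at /push
// (a placeholder) keeps the HTML response open and flushes
// <script>parent.handlePush({...})</script> blocks as updates occur.
function handlePush(update) {
  console.log('pushed:', update);
}

function openForeverFrame() {
  const iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  iframe.src = '/push';  // long-lived HTML response full of <script> tags
  document.body.appendChild(iframe);
}

openForeverFrame();
```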
Looks like several techniques developed by Lightstreamer, which include "vanilla" Comet. A brief excerpt from the Lightstreamer white paper:
Each Lightstreamer client typically opens a single permanent connection with Lightstreamer Server, on which the push updates relating to an arbitrary number of items, frames and windows travel by means of multiplexing techniques.
The white paper and demos are very interesting...
I once developed a module for the Lighttpd web server. The module implemented a Full Duplex Ajax technique, very similar to Comet. In my blog posts you'll find everything you need about FDAjax/Comet: JavaScript examples, problems with firewalls and anti-virus programs, etc.
The Lighttpd project seems to be dead. As far as I know there is a similar module for the popular nginx. However, in the future we'll use WebSockets.
BTW, I used a few HTTP addresses (www1.example.com, www2.example.com, ...) to work around the browsers' limit of at most two concurrent connections to the same web server. The www[n] names were in fact resolved to the same IP address. In case of a possible lockup, the browser was automatically redirected to the next www[n] address.
