Ok, this is downright bizarre. I am building a web application that relies on long-held HTTP connections (Comet) to stream data from the server to the application.
Now, the problem is that this does not seem to play well with some anti-virus programs. We are now in beta, and some users are facing problems with the application when their anti-virus is enabled. It's not just one specific anti-virus either. I found this workaround for Avast when I looked online: http://avricot.com/blog/index.php?post/2009/05/20/Comet-and-ajax-with-Avast-s-shield-web-:-The-salvation-or-not
However, does anyone here have any suggestions on how to handle this? Should I send any specific headers to please these security programs?
This is a tough one. The kind of anti-virus feature that causes this tries to prevent malicious code running in the browser from uploading your personal data to a remote server. To do that, the anti-virus tries to buffer all outgoing traffic before it hits the network, and scan it for pre-defined strings.
This works when the application sends a complete HTTP request on the socket, because the anti-virus sees the end of the HTTP request and knows that it can stop scanning and send the data.
In your case, there's probably just a header without a length field, so until you send enough data to fill the anti-virus's buffer, nothing will be written to the network.
If that's not a good reason to turn that particular feature off, I don't know what is. I ran into this with Avast and McAfee - at this point, the rest of the anti-virus industry is probably doing something similar. Specifically, I ran into this with McAfee's Personal Information Protection feature, which, as far as I can tell, is simply too buggy to use.
If you can, just keep sending data on the socket, or send the data in HTTP messages that have a length field. I tried reporting this to a couple of anti-virus vendors - one of them fixed it, the other one didn't, to the best of my knowledge.
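For the length-field approach, each push can be delivered as a complete, Content-Length-delimited response that the client immediately re-requests (long polling). A minimal sketch, assuming a Node.js server; the /events endpoint and the waitForNextEvent helper are hypothetical:

```javascript
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/events') {
    waitForNextEvent().then((event) => {
      const body = JSON.stringify(event);
      res.writeHead(200, {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(body), // explicit length field
        'Cache-Control': 'no-cache',
      });
      res.end(body); // a complete message: the scanner can release it at once
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);

// Hypothetical stand-in for "the next thing the server wants to push".
function waitForNextEvent() {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ ts: Date.now() }), 1000));
}
```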
Of course, this sort of feature is completely useless. All a malicious application would need to do to get around it is to ROT13 the data before sending it.
Try using https instead of http. There are scanners that intercept https, too, but they're less common and the feature defaulted to off last time I checked. It also broke Firefox SSL connectivity when activated, so I think very few people will activate it and the vendor will hopefully kill the feature.
The problem is that some files can't be scanned in order - later parts are required to determine if the earlier parts are malicious.
So scanners have a problem with channels that stream data. I doubt your stream of data can be recognised as a clean file type, so the scanner is attempting to scan it as best it can, and I'd guess it's holding up your stream in the process.
The only thing I can suggest is to do the data transfer in small transactions, and use the Comet connection for notification only (closing each channel after a single notification).
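As a sketch of that pattern in browser JavaScript (the endpoint names are hypothetical): the Comet channel carries only a small notification, and the payload itself travels in a short, self-contained request that a scanner can buffer and release whole.

```javascript
function waitForNotification() {
  fetch('/comet/notify')                   // held open until one notification
    .then((res) => res.json())
    .then((note) => fetch('/data/' + encodeURIComponent(note.id)))
    .then((res) => res.json())
    .then((payload) => render(payload))
    .finally(waitForNotification);         // close and reopen the channel
}

function render(payload) {
  console.log('got payload', payload);     // placeholder for real UI work
}

waitForNotification();
```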
If you use a non-standard port for your web requests, you may be able to work around this, but there are a number of other issues - namely, that many browsers will consider the requests cross-domain. I'm not sure I have a better suggestion to offer here. It really depends on how the AV program intercepts a given port's traffic.
I think you're going to be forced to break the connection and reconnect. What does your code do if the connection goes down in an outage situation? I had a similar problem with a firewall once. The code had to detect the disconnect, then reconnect. I like the answer about breaking up the data transfer.
I want to create a multiplayer game using JavaScript (no jQuery) and PHP, where most of the mechanics use AJAX calls. However, I need to determine when a user has left the game so I can update the player status on the other players' screens (I assume by regular AJAX requests?). Also, once all players have left, the game files (.txt) on the server need to be deleted.
I am using a free web hosting service, which means I can't use WebSockets or cron jobs. I also don't want to use Node.js. Most of what I have read advises regularly timestamping with PHP sessions, and this is fine, but I would like to know how to then check whether the user/game has been inactive for a period of time.
Also, window.onbeforeunload is too unreliable, in case browsers crash, etc.
Don't use the wrong tool for the job.
Any kind of AJAX-based solution you attempt for this purpose is likely to be very inefficient, or unreliable, or probably both. If you have more than a tiny number of concurrent users, the sheer volume of AJAX requests would be likely to overwhelm the server and potentially bust your monthly quota. And as you've discovered, determining when someone has ended their session by closing the browser window is not straightforward or reliable. I would advise against any such architecture.
WebSockets are really the correct solution for real-time or near-real-time updates between client and server (and vice versa). It's also easy for the socket server to know when someone has disconnected (which happens when they close the window/tab).
So you could either upgrade your hosting so you're able to run a websocket server successfully, or integrate a websocket-based solution hosted elsewhere, e.g. Azure SignalR or some similar product (I am not making specific recommendations in an answer, as that's regarded as off-topic).
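For illustration, a minimal client-side sketch; the URL and message shape are assumptions, not any specific product's API:

```javascript
const socket = new WebSocket('wss://game.example.com/ws');

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'join', player: 'alice' }));
});

// On the server, whichever WebSocket library you use, the matching logic
// is roughly:
//   connection.on('close', () => {
//     markPlayerOffline(connection.player); // update the other players
//     if (activePlayers() === 0) deleteGameFiles();
//   });
```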
Use an interval function and ping the PHP endpoint every second.
If the pings stop, the user is out.
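A minimal sketch of that idea, for what it's worth (the /heartbeat.php endpoint and the 10-second cutoff are assumptions):

```javascript
// Client: one heartbeat per second.
setInterval(() => {
  fetch('/heartbeat.php', { method: 'POST' }).catch(() => {
    // network error: the server's timeout will catch this player anyway
  });
}, 1000);

// Server side (in PHP, per the question): each heartbeat writes a
// per-player "last seen" timestamp, and any request handler sweeps for
// players with (now - last_seen) > 10 seconds, marks them as gone, and
// deletes the game files once nobody is left.
```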
We all know the online and offline events in the browser. They don't work very well (they measure something entirely different).
Right now our site implements this by spamming our backend server with a request every second. I suggested sending a HEAD request to / of our domain - as it is a Single Page Application, that should be quite fast, and there would be no need to spam the backend. But the customer said that we could ping the gateway, the first point of the ISP.
I am not sure how to implement that in the browser. First of all, I'm not sure the browser can even discover the ISP's first hop, and ping may be disabled there anyway - that is quite common practice.
Could you please suggest anything?
I've just finished investigating my customer's proposal: ping the gateway, the first point of the ISP. The browser doesn't provide such capabilities - ping is an ICMP call, which the browser can't make, and it might be disabled on that IP address anyway. The general idea might be good, but even if pinging that IP address works, it doesn't guarantee that the internet connection is working. So I'll go with my first idea: handle it at the nginx level (to avoid burdening the backend) and see how it works.
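A minimal sketch of the browser side (the /ping path is an assumption; the idea is that nginx answers it directly, e.g. with an empty 204, so the backend never sees the request):

```javascript
async function isOnline() {
  try {
    const res = await fetch('/ping', { method: 'HEAD', cache: 'no-store' });
    return res.ok;
  } catch (err) {
    return false; // request never completed: treat as offline
  }
}

setInterval(async () => {
  document.body.classList.toggle('offline', !(await isOnline()));
}, 1000);
```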
Some odd requests have been appearing in our logs since ~October 20, 2014. They've increased to a few dozen a day, so while not a big problem, it's still interesting to find out the reason.
Earlier ones:
REQUEST[/en/undefinedsf_main.jsp?clientVersion=null&dlsource=null&CTID=null&userId=userIdFail&statsReporter=false] REFERER[http://colnect.com/en/coins]
REQUEST[/fr/undefined/GoogleExtension/deals.html?url=http://colnect.com&subid=STERKLY&appName=HypeNet&pos=2&frameId=buaovbluurbavptkwyaybzjrqweypsbavwrviv] REFERER[http://colnect.com/fr]
REQUEST[/br/stamps/undefined49507173c45043eba6dfb9da540e52de&chnl=slmbBRex&evt=DailyPing&prd=vbates&seg=1&ext=1&rnd=65983fb77b62e25cc2a8ef15af18273d] REFERER[http://colnect.com/br/stamps/countries]
Some current ones:
REQ[/ru/collectors/collector/undefined] REF[http://colnect.com/ru/collectors/collector/jokitsos]
REQ[/th/collectors/collector/undefined] REF[http://colnect.com/th/collectors/collector/VRABEC]
REQUEST[/en/account/undefined] REFERER[http://colnect.com/en/account/request_password]
REQUEST[/pt/stamps/undefined] REFERER[http://colnect.com/pt/stamps/years]
Some requests are from logged-in members and some are not.
I'd guess some JavaScript in their browser is trying to build a URL from an uninitialized variable, hence the "undefined".
Reasons may be similar to Odd requests to non-existing pages that all include "6_S3_" (perhaps malware) but I'm wondering if this might be a different reason.
I doubt it's a bug in our client-side JavaScript, as that would generate far more than a few dozen such requests a day out of about a million daily page views.
Any ideas? Is it worth pursuing?
This is a big concern, but it's not coming from you.
These are JavaScript injection attacks (client-machine malware) using a self-signed root certificate.
Specifically, sf_main.html and deals.html have been linked to Superfish, which has been shipping with Lenovo PCs recently. As Lenovo has been pushing its new lines of PCs, reports of the attacks have blown up.
These man-in-the-middle attacks start by hijacking the client's requests and then injecting HTML and JavaScript.
The reason there are so many undefined symbols is that Superfish, true to its name, is fishing for plugins, extensions, and libraries it can take advantage of, using their expected names, tokens, and paths. This is brute-force XSS.
Oh, no, what can I do??
Little. Not much.
As the requests are being hijacked on the client machine via HTTP request hijacking, you won't be able to tell the difference. You could try to "fish" for certain kinds of hostile "indicators", but then you're doing the work of anti-malware.
Lenovo claims that
"SuperFish has completely disabled server side interactions (since January) on all Lenovo products so that the software product is no longer active, effectively disabling SuperFish for all products in the market."
While I trust the sincerity of China-based Lenovo, which has serious market interests in the Western world, I wouldn't trust the word of the China-based malware company Superfish.
These attacks are less of a problem for you than for your customers.
Unless you work for a big bank or a popular social networking site, it's highly unlikely that malware like Superfish has targeted you specifically. Your customers' bank and social network accounts are at risk, but not because of anything you did or can do to stop it.
As always, the cure for client-side fishing attacks is good client-side protection.
Any ideas?
There seem to be two different options here:
There are mistakes in your code causing incorrect URLs to get generated
There are (search) bots trying to parse your Javascript and failing to do it correctly.
(A client side extension is stirring up trouble)
To differentiate between the two, you would need to set up more specific logging. For example, adding the user agent to any log lines containing the string undefined will answer this question. If it's your code causing the problem, you would also want to log the referer header, as it will expose on which page the faulty URLs are being generated.
Another way to identify the issue: if you have an analytics solution running on your site, such as Google Analytics, you can quite easily limit your report to only URLs containing undefined. If there are no such requests, you can conclude that it has to be a bot (as a bot wouldn't cause the client-side analytics code to run); otherwise the report provides all the information needed to identify where the problem is caused.
Lastly, it might be a good idea to include a JavaScript error-logging solution (in its simplest form, a window.onerror handler with an AJAX request to /log.something). If your code is generating undefineds, it's quite likely that some errors are being triggered as well.
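A sketch of that simplest form (the /log endpoint is hypothetical; any handler that records the payload will do):

```javascript
window.onerror = function (message, source, lineno, colno) {
  const xhr = new XMLHttpRequest();
  xhr.open('POST', '/log');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({
    message: String(message),
    source: source,
    line: lineno,
    column: colno,
    ua: navigator.userAgent, // helps separate bots from real browsers
    page: location.href,     // shows where the faulty URL was generated
  }));
};
```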
Is it worth pursuing?
If users are actually being served invalid pages, then yes, this is definitely something to be investigated.
We have a few staging environments for internal testing/dev that do not use "real" SSL certs. Honestly, I'm a bit fuzzy on the details, but the bottom line is that when accessing a subdomain in those environments, the browser would prompt you to add a security exception, along the lines of "You have asked Firefox to connect securely to example.com but we can't confirm that your connection is secure".
Could this be detected, e.g. by making a request to the URL in question and processing the error code or any other relevant information it may come back with? I could not find any specification indicating how this is handled by the browser.
Edit:
I don't mind the error occurring on the landing page itself; it's pretty clear to the user. However, some requests fail like this in the background (pulling CSS/JS/other static content from different subdomains), and you don't know they do unless you open the Net panel in Firebug, open the request in a new tab, and see the error...
The intention is not to circumvent this but rather to detect the issue and say something like "hey, these requests are failing, you can add security exceptions by going to these urls directly: [bunch of links]"
Checking the validity of the certificate is solely the responsibility of the client. Only it can know that it has to use HTTPS, and that it has to use it against a certificate that's valid for that host.
If the users don't make these checks and therefore put themselves in a position where a MITM attack could take place, you wouldn't necessarily be able to know about it. An active MITM attacker could answer the probes you use to try to check that users are doing things correctly, while the legitimate users might never know about it. This is quite similar to relying on redirections from http:// to https://: it works as long as there is no active MITM attack downgrading the connection.
(There is an exception to this, to make sure the client has seen the same handshake as you: client certificates. In this case, you would at least know that a client that has authenticated with a cert saw your server cert and not a MITM cert, because of the signature at the end of the handshake. This is not really what you're looking for, though.)
JavaScript mechanisms generally won't let you check the certificate yourself. This being said, XHR requests to untrusted websites (with such warnings) will fail one way or another (generally via an exception): this could be a way to detect whether pages other than the landing page are accessible via background requests (although you will certainly run into issues regarding Same Origin Policies).
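As a sketch of that approach (the hostnames and /ping path are assumptions; note that a failed request is indistinguishable from any other network error, so treat a failure only as a hint that an exception may be needed):

```javascript
const assetHosts = ['https://static1.example.com', 'https://static2.example.com'];

Promise.all(assetHosts.map((host) =>
  fetch(host + '/ping', { mode: 'cors' })
    .then(() => null)   // reachable: certificate already trusted
    .catch(() => host)  // failed: possibly an unaccepted certificate
)).then((results) => {
  const failing = results.filter(Boolean);
  if (failing.length > 0) {
    // Hypothetical UI: tell the user which URLs to visit directly.
    console.warn('Add security exceptions by visiting:', failing.join(', '));
  }
});
```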
Rather than using self-signed certificates for testing/development, you would be in a much better position if you deployed a test Certification Authority (CA). There are a number of tools to help you do this (which one to use would depend on the number of certificates you need). You would then have to import your own CA certificate into these browsers (or other clients), but the overall testing would be more realistic.
No.
That acceptance (or denial) only modifies behaviour in the client's browser (each browser in a different way). It ACKs nothing to the server, and the page is not yet loaded at that point, so there is no chance to catch that event.
What they do in this demo is exactly what I want to do.
http://www.lightstreamer.com/demo/RoundTripDemo/
I wonder what Comet technique they are using.
It can't be an iframe, because in Firefox I can open two tabs with the same link; with an iframe you can't do that. And it can't be long polling with AJAX, because I didn't see it poll anything in Firebug.
Does anyone know the answer? (Links to good tutorials that do exactly the same thing with the same technique would be great.)
Whilst digging through the obfuscated scripts is not something I fancy right now, judging by the contents of the page DOM it is posting data from a <form> inside a hidden <iframe> to send data to the server, and having the server send back <script> tags with code to pass data back to the caller.
This is a rather heavyweight and obtrusive technique. It was the only way of doing in-page server communication in the days before XMLHttpRequest existed; I typically wouldn't use it today.
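Stripped to its skeleton, the technique looks roughly like this (the endpoint names and iframe markup are assumptions, not Lightstreamer's actual code):

```javascript
// Receiving side: a hidden iframe whose response never finishes.
//   <iframe src="/comet/stream" style="display:none"></iframe>
// The server keeps that response open and periodically writes chunks like:
//   <script>parent.handleUpdate({"price": 42});</script>
window.handleUpdate = function (data) {
  console.log('update from server', data); // called by the streamed scripts
};

// Sending side: post a hidden <form> into another hidden iframe (named
// "comet-sink" here) so the page itself never navigates.
function sendToServer(fields) {
  const form = document.createElement('form');
  form.method = 'POST';
  form.action = '/comet/send';
  form.target = 'comet-sink';
  for (const name in fields) {
    const input = document.createElement('input');
    input.type = 'hidden';
    input.name = name;
    input.value = fields[name];
    form.appendChild(input);
  }
  document.body.appendChild(form);
  form.submit();
  form.remove();
}
```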
(I wish WebSocket would hurry up and get implemented, doing away with all the long-polling nastiness.)
Looks like several techniques developed by Lightstreamer, which include "vanilla" Comet. A brief excerpt from the Lightstreamer white paper:
"Each Lightstreamer client typically opens a single permanent connection with Lightstreamer Server, on which the push updates relating to an arbitrary number of items, frames and windows travel by means of multiplexing techniques."
The white paper and demos are very interesting...
I once developed a module for the Lighttpd web server. The module implemented a Full Duplex Ajax technique, very similar to Comet. In my blog posts you'll find everything you need about FDAjax / Comet: JavaScript examples, problems with firewalls and anti-virus programs, etc.
The Lighttpd project seems to be dead. As far as I know, there is a similar module for the popular nginx. In the future, however, we'll use WebSockets.
BTW, I used a few hostnames (www1.example.com, www2.example.com, ...) to work around the browser limit of at most two concurrent connections to the same web server. The www[n] names in fact resolved to the same IP address. In case of a possible lockup, the browser was automatically redirected to the next www[n] address.
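A sketch of that failover idea in browser JavaScript (the hostnames and the 30-second watchdog are assumptions, not values from the original setup):

```javascript
const hosts = ['www1.example.com', 'www2.example.com', 'www3.example.com'];
let current = 0;
let watchdog;

function openStream() {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://' + hosts[current] + '/comet/stream');

  const armWatchdog = () => {
    clearTimeout(watchdog);
    watchdog = setTimeout(() => {
      xhr.onloadend = null;                     // avoid a double reconnect
      xhr.abort();                              // no data for 30s: assume lockup
      current = (current + 1) % hosts.length;   // "redirect" to the next www[n]
      openStream();
    }, 30000);
  };

  xhr.onprogress = armWatchdog; // any received data proves the host is alive
  xhr.onloadend = () => {       // stream closed normally: just reopen it
    clearTimeout(watchdog);
    openStream();
  };

  armWatchdog();
  xhr.send();
}

openStream();
```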