Requests to non-existing pages that all include "undefined" - javascript

Some odd requests have been appearing in our logs since ~October 20, 2014. They've increased to a few dozen a day, so while not a big problem, it's still interesting to find out the reason.
Earlier ones:
REQUEST[/en/undefinedsf_main.jsp?clientVersion=null&dlsource=null&CTID=null&userId=userIdFail&statsReporter=false] REFERER[http://colnect.com/en/coins]
REQUEST[/fr/undefined/GoogleExtension/deals.html?url=http://colnect.com&subid=STERKLY&appName=HypeNet&pos=2&frameId=buaovbluurbavptkwyaybzjrqweypsbavwrviv] REFERER[http://colnect.com/fr]
REQUEST[/br/stamps/undefined49507173c45043eba6dfb9da540e52de&chnl=slmbBRex&evt=DailyPing&prd=vbates&seg=1&ext=1&rnd=65983fb77b62e25cc2a8ef15af18273d] REFERER[http://colnect.com/br/stamps/countries]
Some current ones:
REQ[/ru/collectors/collector/undefined] REF[http://colnect.com/ru/collectors/collector/jokitsos]
REQ[/th/collectors/collector/undefined] REF[http://colnect.com/th/collectors/collector/VRABEC]
REQUEST[/en/account/undefined] REFERER[http://colnect.com/en/account/request_password]
REQUEST[/pt/stamps/undefined] REFERER[http://colnect.com/pt/stamps/years]
Some requests are from logged-in members and some are not.
I'd guess some JavaScript in their browsers is building a URL from an uninitialized variable, hence the "undefined".
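For illustration, concatenating an uninitialized variable into a string yields the literal text "undefined" (the variable names here are invented):

    // config.basePath is never set, so it is undefined.
    var config = {};

    // String concatenation coerces undefined to the string "undefined",
    // producing exactly the kind of path seen in the logs.
    var url = '/en/' + config.basePath + 'sf_main.jsp';
    console.log(url); // "/en/undefinedsf_main.jsp"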
The cause may be similar to Odd requests to non-existing pages that all include "6_S3_" (perhaps malware), but I'm wondering if this might be something different.
I doubt it's a bug in our own client-side JavaScript, as that would generate far more than a few dozen such requests a day out of about a million daily page views.
Any ideas? Is it worth pursuing?

This is a big concern, but it's not coming from you.
These are JavaScript injection attacks (client-machine malware) using a self-signed root certificate.
Specifically, sf_main.jsp and deals.html have been linked to Superfish, which has been shipping on Lenovo PCs recently. As Lenovo has been pushing its new lines of PCs, reports of the attacks have blown up.
These man-in-the-middle attacks start by hijacking the client's requests and then injecting HTML and JavaScript.
The reason there are so many undefined symbols is that Superfish, true to its name, is fishing for plugins, extensions, and libraries it can take advantage of, using their expected names, tokens, and paths. This is brute-force XSS.
Oh, no, what can I do??
Little. Not much.
As the requests are being hijacked on the client machine via HTTP request hijacking, you won't see the difference. You could try to "fish" for certain kinds of hostile "indicators", but then you are doing the work of anti-malware software.
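If you did want to fish for indicators anyway, a hypothetical sketch could look like this (Node/Express is an assumption, and the marker list is just the patterns quoted in the question):

    const express = require('express');
    const app = express();

    // Markers taken from the request patterns quoted in the question;
    // illustrative, not exhaustive.
    const HOSTILE_MARKERS = [/undefined/, /sf_main\.jsp/, /GoogleExtension\/deals\.html/];

    app.use((req, res, next) => {
      if (HOSTILE_MARKERS.some((re) => re.test(req.url))) {
        // Flag for later analysis instead of silently serving a 404.
        console.warn('suspicious request:', req.url);
      }
      next();
    });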
Lenovo claims that
"SuperFish has completely disabled server side interactions (since January) on all Lenovo products so that the software product is no longer active, effectively disabling SuperFish for all products in the market."
While I trust the sincerity of China-based Lenovo, which has serious market interests in the Western world, I wouldn't trust the word of the China-based malware company Superfish.
These attacks are less of a problem for you than for your customers
Unless you work for a big bank or a popular social networking site, it's highly unlikely that malware like Superfish has targeted you specifically. Your customers' bank and social network accounts are at risk, but not because of anything you did, or anything you can do to stop it.
As always, the cure for client-side fishing attacks is good client-side protection.

Any ideas?
There seem to be a few different possibilities here:
There are mistakes in your code causing incorrect URLs to be generated.
There are (search) bots trying to parse your JavaScript and failing to do it correctly.
(A client-side extension is stirring up trouble.)
To differentiate between these you would need to set up more specific logging. For example, adding the user agent to any log line containing the string undefined will answer this question. If it's your code causing the problem, you would also want to log the Referer header, as it will reveal on which pages the faulty URLs are being generated.
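As a minimal sketch of such logging (assuming a Node/Express stack, which your site may well not use), a middleware could capture exactly these fields:

    const express = require('express');
    const fs = require('fs');
    const app = express();

    app.use((req, res, next) => {
      // Log only the suspicious requests; normal traffic stays out of this file.
      if (req.url.includes('undefined')) {
        const line = [
          new Date().toISOString(),
          req.url,
          req.get('user-agent') || '-', // bot vs. real browser
          req.get('referer') || '-'     // the page that generated the faulty URL
        ].join('\t') + '\n';
        fs.appendFile('undefined-requests.log', line, () => {});
      }
      next();
    });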
Another way to identify the issue: if you have an analytics solution running on your site, such as Google Analytics, you can quite easily limit your report to only URLs containing undefined. If there are no such requests, you can conclude that it has to be a bot (as a bot wouldn't execute the client-side analytics code); otherwise the report provides all the information needed to identify where the problem originates.
Lastly, it might be a good idea to include a JavaScript error-logging solution (in its simplest form, a window.onerror handler that sends an AJAX request to a logging endpoint). If your code is generating undefineds, it's quite likely that some errors are being triggered as well.
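A minimal sketch of such a handler, assuming a hypothetical /log endpoint on the same origin:

    // Report uncaught errors to the server; the /log endpoint is an assumption.
    window.onerror = function (message, source, lineno, colno) {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/log', true); // asynchronous, fire-and-forget
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify({
        message: message,
        source: source,
        line: lineno,
        column: colno,
        page: location.href
      }));
    };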
Is it worth pursuing?
If users are actually being served invalid pages, then yes, this is definitely something to be investigated.

Related

Transmission of information solely for receipt by browser environment?

As part of a thought experiment, I am attempting to ascertain whether there is any hope in a server providing a piece of data only for receipt and use by a browser environment, i.e. which could not be read by a bot crawling my site.
Clearly, if that information is sent in the source code, or indeed via any usual HTTP means, this can be picked up by a bot - so far, so simple.
But what about if the information was transmitted by the server instead as a websocket message: Wouldn't this be receivable only by some corresponding (and possibly authenticated) JavaScript in the browser environment, thus precluding its interception by a bot?
(This is based on my assumption that a bot has no client environment and is essentially a malicious server-side script calling a site over something like cURL, pretending to be a user).
Another way of phrasing this question might be: with the web implementation of websockets, is the receipt of messages always done by a client environment (i.e. JS)?
I can't answer about websockets, but a sufficiently motivated attacker will find a way to emulate whatever environment you require. By loading this content through AJAX, you can eliminate the casual bots. You can eliminate well-behaved bots with robots.txt.
Using WebSocket makes no difference. You cannot escape the following fact: you can always write a non-browser client that looks and behaves to the server exactly like any standard browser.
I can fake any HTTP headers (like browser vendor, etc.) you might read. The Origin header doesn't help either (I can fake it). Neither do cookies; I'll read them and send them back.
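For illustration, a non-browser client that presents itself as a regular browser takes only a few lines of Node (all header values here are arbitrary):

    const https = require('https');

    // A script pretending to be a browser: every header is under the client's control.
    const req = https.request({
      hostname: 'example.com',           // placeholder host
      path: '/protected-data',
      headers: {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:35.0) Gecko/20100101 Firefox/35.0',
        'Origin': 'https://example.com', // faked origin
        'Cookie': 'session=replayed-value'
      }
    }, (res) => {
      res.on('data', (chunk) => process.stdout.write(chunk));
    });
    req.end();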
You might get somewhere by protecting your site with strong CAPTCHAs and setting cookies only after the CAPTCHA has been solved. That depends on the CAPTCHA being unsolvable by bots...

Checking if a security exception has been accepted by the client

We have a few staging environments for internal testing/dev that do not use "real" SSL certs. Honestly, I'm a bit fuzzy on the details, but the bottom line is that when accessing a subdomain in those environments, the browser prompts you to add a security exception, along the lines of "You have asked Firefox to connect securely to example.com but we can't confirm that your connection is secure".
Could this be detected, e.g., by making a request to the URL in question and processing the error code or any other relevant information it may come back with? I could not find any specification indicating how this is handled by the browser.
Edit:
However, some requests fail like this in the background (pulling CSS/JS/other static content from different subdomains), and you don't know they do unless you go to the Net panel in Firebug, open the request in a new tab, and see the error...
The intention is not to circumvent this, but rather to detect the issue and say something like "hey, these requests are failing, you can add security exceptions by going to these URLs directly: [bunch of links]".
Checking the validity of the certificate is solely the responsibility of the client. Only it can know that it has to use HTTPS, and that it has to use it against a certificate that's valid for that host.
If the users don't make these checks and therefore put themselves in a position where a MITM attack could take place, you wouldn't necessarily be able to know about it. An active MITM attacker could perform the very tasks you use to try to check that users are doing things correctly, while the legitimate users might never get to know about it. This is quite similar to wanting to use redirections from http:// to https://: it works as long as there is no active MITM attack downgrading the connection.
(There is an exception to this, to make sure the client has seen the same handshake as you: when using client certificates. In this case, you would at least know that a client that has authenticated with a cert has seen your server cert and not a MITM cert, because of the signature at the end of the handshake. This is not really what you're looking for, though.)
JavaScript mechanisms generally won't let you check the certificate yourself. This being said, XHR requests to untrusted websites (with such warnings) will fail one way or another (generally via an exception): this could be a way to detect whether pages other than the landing page are accessible via background requests (although you will certainly run into issues regarding Same Origin Policies).
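As a rough sketch of such a probe from the landing page (the resource URLs are placeholders, and note that an error here does not distinguish a certificate problem from any other load failure):

    // Probe each subdomain resource; if its certificate is untrusted, the
    // request errors out and we can tell the user which hosts need an exception.
    var resources = [
      'https://static.example.com/ping.gif',
      'https://api.example.com/ping.gif'
    ];

    resources.forEach(function (url) {
      var img = new Image(); // image loads sidestep most same-origin restrictions
      img.onerror = function () {
        console.warn('Cannot load ' + url +
          ' - you may need to add a security exception for its host.');
      };
      img.src = url + '?cachebust=' + Date.now();
    });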
Rather than using self-signed certificates for testing/development, you would be in a much better position if you deployed a test Certification Authority (CA). There are a number of tools to help you do this (which one to use would depend on the number of certificates you need). You would then have to import your own CA certificate into these browsers (or other clients), but the overall testing would be more realistic.
No.
That acceptance (or denial) only modifies a behavior in the client's browser (each browser in a different way). It acknowledges nothing to the server, and at that point the page is not yet loaded; therefore, there is no chance to catch that event.

Can I duplicate server-side functionality without being able to use server-side tech?

I have recently taken a position at a large corporation as a Web Developer for one of the company's divisions. For my first task I have been asked to create a web form that submits data to a database and then outputs the id# of that data to the user for reference later. Easy, right? Unfortunately not. Because this is a large company that has been around for a long time, their systems are relatively antiquated: none of their servers support server-side technologies (PHP, ASP, etc.), and since they are such a large company, Corporate IT is pretty much a black hole, so there is no real hope of getting such tech implemented.
SO! To my question... is there ANY way to do this without server-side code? To me the answer is 'no', and I have spent the last week researching on sites like this and others without finding any miraculous workarounds. Really, all I have at my disposal are things I can implement without involving IT, i.e., things I can just upload to a web server.
Also as a note: the web server it is on is supposedly IBM HTTP Server (IHS), the database I am supposed to be connecting to is an MS Access database, and the company restricts us to using IE for any web access. As this form is on an internal company INTRAnet site, IE is the only browser it will be accessed from.
I know this is a ridiculous situation but unfortunately that is what I am stuck with. Any ideas???
You must have something that takes the form data and transforms it for insertion into the database.
There are no JavaScript libraries that will do this from the browser directly to the database (security issues in traversing the network, cross-domain issues, etc.).
Something will be serving up the web pages - surely this can be the basis of the server-side coding you need.
Seeing as you are using IBM HTTP Server (gleaned from comments on your question), there are server-side scripting technologies available to you.
Maybe you could create a Web Database with Access Services?
Also as a note: The database I am supposed to be connecting to is a MS Access database and the company restricts us to using IE for any web access. As this form is on an internal company INTRAnet site IE is the only browser it will be accessed from.
That's easy. Use a dirty ActiveX hack to talk to MS Access directly from the browser.
That's going to be a nightmare to code, but it'll work.
You didn't say which version of Access you're using; this page has information on how to set this up for Access 2003, click on "data access pages".
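As a sketch of what the ActiveX route looks like (IE-only; it assumes relaxed intranet security settings, and the UNC path, table, and column names are invented for illustration):

    // Runs only in Internet Explorer with ActiveX enabled for the intranet zone.
    var conn = new ActiveXObject("ADODB.Connection");
    conn.Open("Provider=Microsoft.Jet.OLEDB.4.0;" +
              "Data Source=\\\\fileserver\\share\\forms.mdb;");

    conn.Execute("INSERT INTO Requests (Name, Details) VALUES ('test', 'hello')");

    // Jet supports @@IDENTITY, so the new row's id can be shown to the user.
    var rs = conn.Execute("SELECT @@IDENTITY");
    alert("Your reference id is " + rs.Fields.Item(0).Value);

    rs.Close();
    conn.Close();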
It's probably better in the long run if you don't solve this problem. Management frustration with IT may help you effect change, or at least get you permission to set up a local web server so you can demonstrate what's possible with the right support.

Long held AJAX connections being blocked by Anti-Virus

OK, this is downright bizarre. I am building a web application that relies on long-held HTTP connections (COMET) to stream data from the server to the application.
Now, the problem is that this does not seem to play well with some anti-virus programs. We are now in beta, and some users are facing problems with the application when their anti-virus is enabled. It's not just one specific anti-virus either... I found this workaround for Avast when I looked online: http://avricot.com/blog/index.php?post/2009/05/20/Comet-and-ajax-with-Avast-s-shield-web-:-The-salvation-or-not
However, does anyone here have any suggestions on how to handle this? Should I send a specific header to appease these security programs?
This is a tough one. The kind of anti-virus feature that causes this tries to prevent malicious code running in the browser from uploading your personal data to a remote server. To do that, the anti-virus tries to buffer all outgoing traffic before it hits the network, and scan it for pre-defined strings.
This works when the application sends a complete HTTP request on the socket, because the anti-virus sees the end of the HTTP request and knows that it can stop scanning and send the data.
In your case, there's probably just a header without a length field, so until you send enough data to fill the anti-virus's buffer, nothing will be written to the network.
If that's not a good reason to turn that particular feature off, I don't know what is. I ran into this with Avast and McAfee - at this point, the rest of the anti-virus industry is probably doing something similar. Specifically, I ran into this with McAfee's Personal Information Protection feature, which, as far as I can tell, is simply too buggy to use.
If you can, just keep sending data on the socket, or send the data in HTTP messages that have a length field. I tried reporting this to a couple of anti-virus vendors - one of them fixed it, the other one didn't, to the best of my knowledge.
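A sketch of the second option, replacing the open-ended stream with complete, length-delimited messages in a long-polling pattern (the Node server and the waitForNextEvent helper are assumptions):

    const http = require('http');

    // Each poll receives one complete response with a Content-Length, so a
    // scanning proxy sees a finished message instead of an endless stream.
    http.createServer((req, res) => {
      waitForNextEvent((event) => { // hypothetical helper that fires on new data
        const body = JSON.stringify(event);
        res.writeHead(200, {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(body)
        });
        res.end(body); // the client reconnects for the next event
      });
    }).listen(8080);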
Of course, this sort of feature is completely useless. All a malicious application would need to do to get around it is to ROT13 the data before sending it.
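To underline how weak plain string-matching is, the evasion really is this trivial:

    // ROT13 is enough to defeat a scanner that matches pre-defined strings.
    function rot13(s) {
      return s.replace(/[a-zA-Z]/g, function (c) {
        var base = c <= 'Z' ? 65 : 97;
        return String.fromCharCode((c.charCodeAt(0) - base + 13) % 26 + base);
      });
    }

    console.log(rot13('credit card number')); // "perqvg pneq ahzore"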
Try using https instead of http. There are scanners that intercept https, too, but they're less common and the feature defaulted to off last time I checked. It also broke Firefox SSL connectivity when activated, so I think very few people will activate it and the vendor will hopefully kill the feature.
The problem is that some files can't be scanned in order - later parts are required to determine if the earlier parts are malicious.
So scanners have a problem with channels that are streaming data. I doubt your stream of data is able to be recognised as a clean file type, so the scanner is attempting to scan the data as best it can, and I guess holding up your stream in the process.
The only thing I can suggest is to do the data transfer in small transactions, and use the COMET connection for notification only (closing each channel after a single notification).
If you use a non-standard port for your web requests, you may be able to work around this, though there are a number of other issues, namely that this will be considered cross-domain by many browsers. I'm not sure I have a better suggestion to offer here; it really depends on how the AV program intercepts a given port's traffic.
I think you're going to be forced to break the connection and reconnect. What does your code do if the connection goes down in an outage situation? I had a similar problem with a firewall once. The code had to detect the disconnect, then reconnect. I like the answer about breaking up the data transfer.

How to prevent JS hijacking on public computers

This problem concerns a JS hijacking scenario. Here it is:
Say Mr. Good has a website called "iamtooinnocent.com" which loads an "x.js" file to perform some particular tasks, and Mr. Bad is an evil cyber cafe owner who has set up a redirect rule so that whenever any surfer using his cyber cafe visits Good's website, the request for the "x.js" file is redirected to some other evil domain, which serves a different "x.js" file with evil code in it. This way Good's website will never know that it has received a different JS file than the one it requested.
I hope I have explained the scenario properly. My problem is: how can this be prevented? Is there really a way to prevent it? Could it be prevented by serving the JS file over HTTPS? I am not so sure. Can anybody give me a heads-up regarding this?
Thanks in advance.
HTTPS is the standard for fighting man-in-the-middle attacks like the one you've described. It encrypts all traffic using your site's certificate, so the traffic cannot be tampered with in transit. And the certificate itself is verified by third-party certificate authorities.
But it can't guarantee 100% security, because it's possible to install a fake local certificate authority that is present only on the cafe's machines.
If the computer owner is against you... you will have a hard time. The browser guarantees certain security rules, but the computer owner can modify it to his evil heart's content, and you would be none the wiser.
Rule #1 in web security boils down to: NEVER trust the client.
Remember that clients can do just about anything with the data you are sending them, and the data they send YOU:
modify cookies for subsequent requests
alter, add, or remove other HTTP headers, and spoof User-Agents
specify any combination of data in GET/POST
You should assume any data coming in to your application over HTTP is a malicious, tainted, evil mess, and sanitize accordingly.
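As a minimal sketch of that mindset (a Node/Express handler; the endpoint and field names are invented):

    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: false }));

    app.post('/comment', (req, res) => {
      // Treat every incoming value as hostile until validated.
      const text = String(req.body.text || '').trim();
      if (text.length === 0 || text.length > 2000) {
        return res.status(400).send('Invalid input');
      }
      // Escape before echoing back, rather than trusting the client.
      const safe = text
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;');
      res.send('Received: ' + safe);
    });

    app.listen(3000);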
Is this the sort of cyber cafe where they provide the computers? If so, you just have to trust the owner, because you can't have security on somebody else's machine. If nothing else, they can install a hardware keylogger.
If this is the sort where they provide a wireless connection and you bring your laptop, HTTPS should be a safeguard. If your browser handles certificates and SSL properly, it should be possible to go to a site that has a verified certificate and be safe. If there are any problems in your browser, of course, the cyber cafe owner is in an ideal position to take advantage of them, so you might want to keep an eye on known vulnerabilities.
The best move is not to patronize cyber cafes run by evil owners, but that can be difficult in some parts of the world.
