I would like to embed on my website a German link which is free and open only to the German public. If I open it from another country I get "This show is not available in your country for legal reasons."
Now if I use a VPN in the other country I get access to the link. My question is:
Is there any way to embed an iframe, passing its link through a VPN, simply using JavaScript or jQuery?
The final result should be the geo-restricted link visible on my website.
Thanks
Use an HTTP proxy! (A minimal sketch follows the list below.)
Limitations:
No cookies
Sessions get messed up
Some websites have proxy detection
Self-hosting only, with region limitations
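This is only a rough sketch of the idea, assuming a small Node.js relay that you would host on a server in the allowed region; the target URL and port are placeholders, and the limitations above (cookies, sessions, proxy detection) still apply.

const http = require('http');
const https = require('https');

const TARGET = 'https://example-german-site.de/show'; // hypothetical geo-restricted URL

http.createServer((clientReq, clientRes) => {
  // Fetch the page from the server located in the allowed region
  https.get(TARGET, (upstream) => {
    clientRes.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(clientRes); // relay the body to your visitor / iframe
  }).on('error', () => {
    clientRes.writeHead(502);
    clientRes.end('Upstream fetch failed');
  });
}).listen(8080);

Your iframe would then point at this relay instead of the original URL.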
Unfortunately, there is no proper technical way to get the information you want. You can invent some tests, but they will have a very low correlation with reality: either you won't catch the clients you want, or you will get a large number of false positives. Neither outcome makes sense.
There is no difference between a request that has been routed through a VPN and one that has not. It's not practical to get an exhaustive list of VPN endpoint IP addresses, and many of them are shared with non-VPN users anyway.
Generating any kind of traffic back from an Internet server toward an incoming client (a port scan, or even a simple ping) is generally frowned upon. In the case of a port scan it may be even worse for you, for example when the client sits behind a central corporate firewall; the worst case is when the client comes from behind a central government network firewall pool...
I need to get the serial number or some other piece of information from the user's device that doesn't change. I thought about using the IPv4 address, but it can change depending on where the user is, and all the logic I tried to implement didn't work. I'm working in a .NET 6.0 MVC project and trying to implement this logic in C#, but JavaScript would also be possible. I would also use this information to automate the user's login, using a security device already pre-registered by them.
Disclaimer: I work at Fingerprint.
I would also use this information to automate the user's login, using
a security device already pre-registered by him
It might be a good idea to use a browser identifier (fingerprint/visitorId) as a decision point for further choices (e.g. whether to challenge a user with another factor or put up some additional barriers). It's not a good idea to use a fingerprint/visitorId as a password replacement: there may be false results, and this technology is not intended to replace passwords.
Moreover, I'd like to correct some misconceptions from the question and comments.
but from what I saw in the documentation to implement it, you need to
register an SSL address
Open source FingerprintJS is a pure client-side library. There are no HTTP APIs, servers, or requests. You don't perform any Subdomain setup whatsoever.
The Subdomain setup and SSL certificates are related to the Fingerprint Pro, it's a different service (take a look at Pro vs open source comparison). The Subdomain setup improves accuracy among other benefits. You can try the service on localhost without it. Moreover, with the Subdomain setup, you can develop your app on localhost without any limitations as well.
It will generate a hash unique to the browsing device
This is not correct; they are not unique in 100% of cases. The accuracy of the open source FingerprintJS is ~60%, while the accuracy of Fingerprint Pro is ~99.5%. Nevertheless, there can be false positives/negatives, which is the main reason it's not a good idea to use a fingerprint/visitorId as a password replacement.
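For illustration, a minimal sketch of how the open source library is typically used as an additional signal (not a password replacement); the /api/login-signal endpoint is hypothetical.

import FingerprintJS from '@fingerprintjs/fingerprintjs';

async function getVisitorId() {
  const fp = await FingerprintJS.load();   // initialize the agent
  const result = await fp.get();           // collect browser signals
  return result.visitorId;                 // hashed identifier, not guaranteed unique
}

// Send the visitorId along with the login request so the server can decide
// whether to challenge the user with a second factor.
getVisitorId().then((visitorId) => {
  fetch('/api/login-signal', {             // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ visitorId })
  });
});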
Recently I installed Fiddler and it allowed me to view (decrypt) my SSL requests from any browser.
Although it's not legal, some firewalls also allow installing root certificates, after which the firewall can monitor and track the HTTPS protocol.
We provide sensitive information over SSL; how can we check for and prevent such interception via a proxy or a Fiddler-like tool?
Is there any JavaScript API for this? There should be something I can check on page load.
$( function() {
    // isSSLValid() is hypothetical -- this is the kind of API I'm looking for
    if (!isSSLValid()) {
        alert('Your traffic is monitored...');
        location.href = '/SSLInstructions.html';
    }
});
Think of a mobile or tablet browser where, as on iOS, there is no way to view the certificate, just an icon.
(This is quite similar to this question on Security.SE.)
As a client, you can verify that your SSL/TLS connection was not intercepted by a MITM proxy (Fiddler or other) by checking its certificate. That's the entire purpose of having certificates to authenticate servers.
You were only able to allow Fiddler to look at the traffic because you chose to validate its certificate. Similarly, MITM proxy servers (mostly used in corporate environments) need to install their CA certificate on the client machines. In environments where this happens, the clients are not really in control of the machine they use anyway: they delegate their administration to whoever controls that proxy.
It's ultimately the sole responsibility of the client to check that (a) SSL/TLS is used and (b) it is used correctly (with a certificate they can trust for the machine they intended to communicate with in the first place). (See this longer explanation on Webmasters.SE for more details.)
How to verify that ssl was not intercepted via proxy etc in browser?
Tell your users not to ignore warnings. If there's a corporate proxy with matching CA certificates installed on their machines, they could in principle look at the details of the certificate. If they don't trust the machine they're using for this, they should use their own, from a network that allows them not to be intercepted.
Mobile devices are indeed quite poor for checking those details unfortunately, but as a server there's not much you can do.
One way to check whether the client received the same server certificate as the one the server sent is to require client-certificate authentication, which will make the client sign the handshake (including the server cert) with its own private key, so the server can check if the signature matches what it expects. This requires a bit more infrastructure to deal with client-certificates (and you'd need to show your users how to use them).
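A minimal sketch of that idea, assuming a Node.js server and hypothetical certificate file paths; only clients holding a key/cert issued by your client CA can complete the handshake, and that handshake signature covers the server certificate the client actually saw.

const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('server-key.pem'),     // hypothetical paths
  cert: fs.readFileSync('server-cert.pem'),
  ca: fs.readFileSync('client-ca.pem'),       // CA that issued the client certificates
  requestCert: true,                          // ask the client for a certificate during the handshake
  rejectUnauthorized: true                    // reject clients whose certificate doesn't verify
};

https.createServer(options, (req, res) => {
  // Reaching this point means the client signed the handshake with its private key,
  // so it cannot have been silently replaced by a MITM proxy.
  const cert = req.socket.getPeerCertificate();
  res.end('Hello ' + (cert.subject ? cert.subject.CN : 'client'));
}).listen(8443);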
EDIT: About your comment.
That's a flaw in SSL: we have no way to check if anyone is watching
or not, and it defeats the whole purpose of it.
Not really; it's ultimately always the responsibility of the user to check what/who they are talking to, even in real-life situations. If you fill in the right forms to vote by proxy and delegate this voting power to someone, or if you give your ID and delivery slip to someone to pick up a parcel for you from the post office, it's up to you to make sure that you trust that person. If you give your bank passwords to someone and they phone your bank for you, your bank has no way of telling whether it's you or not: as far as it's concerned, you're identified by those credentials.
Only the user is in a position to check that it's talking to the right server: if not, the legitimate server isn't in contact with the user, so it simply can't give any warning that something wrong is happening.
You simply will never be able to force the users to talk to the right server from your server, because you don't control what they do. They could give their passwords to anyone they want, you wouldn't know. (You can teach them not to do so, at best.)
What you can do is to prevent your server to give data to someone who isn't your user. Following the real-life analogies, this can be done by using mechanisms where you insist on the person to be present (you don't allow someone to act for someone else, even if they turn up with the other person's ID). This can be done with SSL/TLS when using client-certificates: only the holder of the private key can be authenticated, no intermediate party. (Of course, from a practical point of view, users would have to make sure they don't give their private keys.)
We have a few staging environments for internal testing/dev that do not use "real" SSL certs. Honestly I'm a bit fuzzy on the details, but the bottom line is that when accessing a subdomain on those environments, the browser would prompt you to add a security exception, along the lines of "You have asked Firefox to connect securely to example.com but we can't confirm that your connection is secure".
Could this be detected e.g. by making a request to the url in question and processing the error code/any other relevant information it may come back with? I could not find any specifications to indicate how this is being handled by the browser.
Edit:
I don't mind the error occurring on the landing page itself; it's pretty clear to the user. However, some requests fail like this in the background (pulling CSS/JS/other static content from different subdomains), and you don't know they do unless you go to the Net panel in Firebug, open the request in a new tab, and see the error...
The intention is not to circumvent this but rather to detect the issue and say something like "hey, these requests are failing, you can add security exceptions by going to these urls directly: [bunch of links]"
Checking the validity of the certificate is solely the responsibility of the client. Only it can know that it has to use HTTPS, and that it has to use it against a certificate that's valid for that host.
If the users don't make these checks and therefore put themselves in a position where a MITM attack could take place, you wouldn't necessarily be able to know about it. An active MITM attacker could perform the tasks you use to try to check that the users are doing things correctly, while the legitimate users might not even get to know about it. This is quite similar to wanting to use redirections from http:// to https://: it works as long as there is no active MITM attack downgrading the connection.
(There is an exception to this, to make sure the client has seen the same handshake as you: when using client certificates. In this case, you would at least know that a client that has authenticated with a cert would have seen your server cert and not a MITM cert, because of the signature at the end of the handshake. This is not really what you're looking for, though.)
JavaScript mechanisms generally won't let you check the certificate themselves. This being said, XHR requests to untrusted websites (with such warnings) will fail one way or another (generally via an exception): this could be a way to detect whether pages other than the landing page are accessible by background requests (although you will certainly run into issues regarding Same Origin Policies), as sketched below.
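A rough sketch of that detection, assuming hypothetical subdomain URLs and a hypothetical probe path; note that a CORS rejection is indistinguishable from a certificate error here, as both simply surface as a failed request.

var hosts = ['https://static1.example.com', 'https://static2.example.com']; // placeholder subdomains

function probe(host) {
  return new Promise(function (resolve) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', host + '/ping.gif');   // any small resource on that host
    xhr.onload = function () { resolve({ host: host, ok: true }); };
    xhr.onerror = function () { resolve({ host: host, ok: false }); }; // cert rejected, CORS, or network error
    xhr.send();
  });
}

Promise.all(hosts.map(probe)).then(function (results) {
  var failing = results.filter(function (r) { return !r.ok; });
  if (failing.length) {
    // Tell the user which URLs to open directly so they can add security exceptions
    console.warn('Add security exceptions for:', failing.map(function (r) { return r.host; }));
  }
});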
Rather than using self-signed certificates for testing/development, you would be in a much better position if you deployed a test Certification Authority (CA). There are a number of tools to help you do this (which one to use would depend on the number of certificates you need). You would then have to import your own CA certificate into these browsers (or other clients), but the overall testing would be more realistic.
No.
That acceptance (or denial) only modifies a behavior in the client's browser (each browser in a different way). It acknowledges nothing to the server, and the page is not yet loaded; therefore, there is no chance to catch that event.
I understand that WebSocket is still being worked on. Now, I don't know if what I'm considering is even technically possible, but I'm just bouncing ideas around.
What I'm thinking of is a clientless SSL VPN using WebSockets. Is it possible to create a WebSocket and redirect all the traffic from the browser (on that particular site/domain) through this socket? So let's say you go to a site http://example.com and this site sets up a WebSocket back to its server. Now can we in any way capture all the traffic going from that browser tab and push it through that WebSocket tunnel (wss://)? This way you could have a clientless SSL VPN solution.
Now, the biggest problem I can see is how do you actually grab all the traffic going from that browser tab or window. I don't think javascript has or will have enough privileges or even capabilities to do that. Any thoughts?
You could present your own browser UI (URL bar + rendering area), push out HTTP requests over your tunnel and parse & present the returned HTML in the rendering area. But you are correct, you aren't going to be able to capture all browser traffic in javascript without somehow escalating privileges (for example, as a Firefox extension).
A web proxy is really what you are describing: http://en.wikipedia.org/wiki/Proxy_server
All browsers have support for HTTP proxy server settings. If the proxy encapsulated the data with SSL and sent it on to another proxy within the firewall (I assume that's why you mention VPN) then I think you have what you are asking. I don't think WebSockets really has any relevance here. You could use it, but it would just be harder.
I have a desktop product which uses an embedded webserver which will use self-signed certs.
Is there something that I can put in a web page that would detect that they haven't added the root CA to their trusted list, and display a link or DIV or something directing them how to do it?
I'm thinking maybe a DIV that has instructions on install the CA, and a Javascript that runs some test (tries to access something without internal warnings??), and hides the DIV if the test succeeds. Or something like that...
Any ideas from the brilliant SO community ? :)
Why do you want to do this? It is a bad idea to train users to indiscriminately install root CA certificates just because a web site tells them to. You are undermining the entire chain of trust. A security conscious user would ignore your advice to install the certificate, and might conclude that you are not taking security seriously since you did not bother to acquire a certificate from an existing CA.
Do you really need HTTPS? If so, you should probably bite the bullet and make a deal with a CA to facilitate providing your customers with proper CA signed server certificates. If the web server is only used for local connections from the desktop app, you should either add the self-signed certificate to the trusted list as part of the installation process, or switch to HTTP instead.
Assuming you know C# and you want to install a .pfx file: create an exe that will be run from a URL. Follow this URL
and this
The only idea I have is to use frames and some javascript.
The first element of the frame will act as a watchdog waiting x amount of time (javascript setTimeout) before showing your custom ssl failure message to the user with hyperlinks or instructions to download the self-signed cert.
The second frame element attempts the https connection and if successful resets the watchdog frame so that it never fires. If it fails (assume https cert validation failed) the watchdog message would then fire and be presented to the user.
Depending on your browser you will most likely still see some security warning with this approach, but you would at least be able to push your own content without requiring users to run untrusted code with no proper trust chain (which would be much, much worse from a security point of view than accepting the cert validation errors and establishing an untrusted SSL session).
Improvements to the concept may be possible using other testing methods such as XMLHttpRequest et al.
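A simplified sketch of the same watchdog idea, using an image probe instead of a second frame; the timeout, element id and URLs are placeholders.

// Served from the plain-HTTP landing page
var watchdog = setTimeout(function () {
  // The HTTPS probe never reported back: assume the self-signed cert was rejected
  document.getElementById('ssl-help').style.display = 'block'; // instructions + cert download link
}, 5000);

// The probe: load any small resource over HTTPS; onload only fires if the
// browser accepted the certificate (or the user already added an exception).
var probe = new Image();
probe.onload = function () { clearTimeout(watchdog); };
probe.src = 'https://localhost:8443/ok.png';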
You should not do this. Root certificates are not something you just install, since adding one could compromise any security given to you by https.
However, if you are making a desktop app, then just listen on 127.0.0.1 only. That way the traffic never leaves the user's computer and no attacker can listen in.
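For example, with a Node.js embedded server (a sketch; the port is a placeholder), binding to the loopback interface keeps the server unreachable from other machines:

const http = require('http');

http.createServer((req, res) => {
  res.end('local only');
}).listen(8080, '127.0.0.1'); // only reachable from the local machine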
You might try to add some (hidden) Flex element or Java Applet once per user session.
It would just download any HTTPS page from your server and get all the information about the connection:
com.sun.deploy.security.CertificateHostnameVerifier.verify()
or
javax.security.cert.X509Certificate.checkValidity()
I suppose Flex (which is more common among users) should have similar ways of validating the HTTPS certificate from the user's point of view. Flex should also share the OS's trusted cert store, while Java might have its own.
Since the server is running on the client machine (desktop product) can it not check the supported browsers for installed certs using winapi/os functions? I know Firefox has a cert database in the user's profile directory and IE probably keeps information in the registry. It wouldn't be reliable for all browsers but if the server simply chooses between "Certificate Found" and "Please ensure you have installed the cert before continuing" then no harm is done as the user can choose to continue either way.
You could also simplify matters by providing an embedded browser as well (ie, gecko), this way you only have 1 browser to deal with which simplifies a lot of things (including pre-installing the root CA).
To recap: you are setting up webservers on desktop apps; each desktop will have its own webserver, but you want to use SSL to secure the connection to that webserver.
I guess there are several problems here with certificates, one being that the hostname used to access the desktop has to match the certificate. In this case you have little choice but to generate certificates on the client. You'll need to allow the user some way to specify the host name in case the name used by outsiders can't be detected from the host itself.
I'd also suggest allowing for an admin to install a trusted cert, for those who don't want to rely on self-signed certs. This way you can also offload the cost of trusted cert maintenance to the admins who really want it.
Finally, in my experience browsers either allow or refuse the self-signed cert, and there is no way for the server to know whether the cert was denied, temporarily accepted, or permanently accepted. I assume there must be a mechanism somewhere to handle SSL failures, but typical web programming doesn't operate at that layer. In any case, the only thing a webserver can do if SSL fails is to fall back to non-SSL, and you've indicated in a comment that you can't have anything non-SSL. I think you should try to have that restriction lifted; a non-SSL start page would be extremely helpful in this situation: it can test (using frames or images or JSON or AJAX) the HTTPS connection, and it can link to documentation about how to set up the certificate, or to a downloadable installer for the cert.
If the browser won't connect because of a self-signed cert, and you're not allowed to use plain HTTP at all, by what other means could you communicate with the user? There are no other channels and you can't establish one because you don't have any communication.
You mentioned in a comment writing a win32 app for installing the cert. You could install a cert at the time you install the application itself, but that doesn't help any remote browsers, and a local browser doesn't need SSL to access localhost.
We've been working on an open source JavaScript project, called Forge, that's related to this problem. Do you have a website that your users could access? If so, then you could provide a secure connection to those desktop apps via your website using a combination of Flash for cross-domain + JavaScript for TLS. It will require you to implement some web services on your website to handle signing the desktop app certificates (or to have your desktop apps upload their self-signed certs so they can be accessed via JavaScript). We describe how it works here:
http://blog.digitalbazaar.com/2010/07/20/javascript-tls-1/
An alternative to setting up a website, though less secure because it allows for a MITM attack, is to host the JavaScript+Flash directly on the desktop app server. You could have your users hit your desktop app over regular HTTP to download the JS+Flash+SSL cert, but then start using TLS afterwards via the JS. If you're on a localhost connection the MITM attack might be a little less worrisome -- perhaps enough for you to consider this option.
An ActiveX control could do the trick. But I really didn't chime in to help with the solution, more to disagree with the stance that what you are doing is a security risk.
To be clear, you need a secure cipher (hopefully AES and not DES), and you are already in control of your endpoints; you're just not able to completely rule out promiscuous-mode network sniffers that could catch clear-text passwords or other sensitive data.
SSL is a "Secure Sockets Layer", and by definition is NOT dependent upon ANY certificates.
However, all effective modern cipher suites require certificates to authenticate the tunnel endpoints, which is not always a necessity for every application; a frustration I have dealt with in numerous back-end datacenter automation routines using web service APIs to manage nodes, where the "users" were actually processes that needed encrypted key exchange prior to a RESTful command negotiation.
In my case, the VLANs were secured via ACLs, so I really "could" send clear-text authentication headers. But just typing that made me throw up in my mouth a little bit.
I'm sure I'll get flamed for typing this, but I'm extremely battle-hardened and would've made the same comments to you in years 10-15 of my IT career. So I empathize with their worries, and very much appreciate if they are passionate enough about security to flame me. They'll figure it out eventually.....
But I do agree that it is a BAD idea to "train" users to install root CAs on their own. On the other hand, if you use a self-signed cert, you have to train them to install that. And if a user doesn't know how to determine whether a CA cert is trustworthy, they definitely won't be able to tell a self-signed cert from a CA cert, so either process would have the same effect.
If it were me, I would automate the process instead of having it assist the end-users, so that it becomes as hidden from them as possible, just like a proper PKI would do for an enterprise.
Speaking of which, I just thought of a potential solution. Use the Microsoft PKI Model. With Server 2012 R2, you can deliver trusted keys to endpoints that are not even domain members using "device control" via "workspaces", and the client machines can subscribe to multiple workspaces, so they are not committed solely to yours if they subscribe. Once they do, and authenticate, the AD Certificate Services Role will push all root CA Certs necessary, as are present in active directory, or specified LDAP server. (In case you are using offline CA servers)
Also, I realize this thread is like 7 years old, but am sure it still gets referenced by a good number of people needing similar solutions, and felt obligated to share a contrasting opinion. (Ok Microsoft, where's my kickback for the plug I gave you?)
-cashman