Use JavaScript to test if a server is reachable

I have a JavaScript application that loads data asynchronously from a server inside my company's firewall. We sometimes need to show this application outside the firewall, but it is not always possible to get on the VPN. For this reason, we configured the firewall to allow public requests for this data on a specific port. The catch: when I am inside the corporate network, I can only load data using the server's DNS name, due to a peculiarity of the firewall configuration. Outside the corporate network, I have to use the server's public IP address.
Currently we distribute two versions of the JavaScript application: one that works inside the firewall and one that works outside it. We would like to distribute a single app, one that tests both URLs and then, depending on which is reachable, continues to use that address to load data.
I have been using jQuery's $.ajax() call to load the data, and I noticed there is a timeout parameter. I thought I could use a short timeout to determine which server is unreachable. However, this "countdown" doesn't seem to start until the initial connection to the server is made.
Any thoughts on how to determine, in JavaScript, which of two servers is reachable?

Use the error event:
$.ajax({
    url: dnsUrl,
    success: ..., // Normal operation
    error: function () {
        $.ajax({
            url: ipUrl,
            success: ... // Normal operation
        });
    }
});
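If you also want each probe to fail fast instead of waiting for the browser's default timeout, the fallback idea above can be wrapped in a small helper that races each candidate URL against a deadline and remembers the first one that answers. This is a minimal sketch, not part of the original answer: `firstReachable`, `headProbe`, and the URLs are illustrative names, and the probe function is injectable so the logic can be exercised without a network.

```javascript
// Try candidate base URLs in order; resolve with the first one that
// responds before `timeoutMs`. `probe(url)` must return a Promise.
function firstReachable(urls, probe, timeoutMs) {
    const tryNext = (i) => {
        if (i >= urls.length) {
            return Promise.reject(new Error('no server reachable'));
        }
        const deadline = new Promise((resolve, reject) =>
            setTimeout(() => reject(new Error('timeout')), timeoutMs)
        );
        return Promise.race([probe(urls[i]), deadline])
            .then(() => urls[i])          // this URL answered in time
            .catch(() => tryNext(i + 1)); // fall through to the next one
    };
    return tryNext(0);
}

// Example probe using fetch; a HEAD request keeps the transfer small.
function headProbe(url) {
    return fetch(url, { method: 'HEAD' });
}

// firstReachable(['https://dns-name.internal/data',
//                 'https://203.0.113.5/data'], headProbe, 2000)
//     .then((base) => { /* use `base` for all later $.ajax calls */ });
```

Once the promise resolves, cache the winning base URL and build every subsequent data request from it, so the probe only runs once per session.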

You could also put some dummy images on the server and try to load them; the onload event of the image that loads successfully will fire.

Related

Ajax query failing due to OIDC SSO redirect

I'm wondering what the standard solution is to ajax failing due to an (OIDC) SSO redirect/refresh. It's insidious because the failure is silent: the web page appears to be working properly.
I have a simple web page with static HTML and simple javascript that populates the page via ajax and refreshes it periodically.
The webpage and JS are both access-controlled via the same OIDC SSO. This works, but can fail in the following ways when the ajax call is rejected with a 401 due to needing an authentication refresh. (This is not full password authentication; it is just "check that my token is ok, see that it is, and keep going as if nothing had happened".)
Back end and front end are both served from the same server by a basic Apache service with the same Access Control and Authorization requirements.
If a user navigates to the page in such a way that a cached version of the HTML is loaded and just the ajax runs (e.g. the back button).
If the page is left sitting long enough, I believe the refreshes will also fail for the same reason.
I have worked around the issue as shown below, but it feels like a hack, like there must be some much more standard way to do this.
// This function is called every 30s on a timer
function updateData(args, callback) {
    $.ajax({
        xhrFields: { withCredentials: true },
        url: "/getit?" + args,
        success: function (data) {
            localStorage.removeItem('pagereloaded');
            callback(data);
        },
        statusCode: {
            // Reload because SSO tokens can time out, causing ajax to fail.
            // In those cases we want to reload the page right away.
            // But if it's a genuine auth failure, we must not go into an
            // infinite reload loop. So 'pagereloaded' plus the timer allow
            // one quick reload the first time, then avoid a loop after that.
            401: function () {
                if (localStorage.getItem('pagereloaded') == null || (Date.now() - start_time) > 60000) {
                    localStorage.setItem("pagereloaded", 1);
                    location.reload();
                }
            }
        }
    });
}
WEB AND API MODULES
Sounds like the Apache module you are using is intended only for website requests, and is not intended for direct API requests. The more standard way to deal with this is via separate modules that are tailored to their clients - something like this:
PATH: https://example.com/www/** --> uses an OIDC module to verify credentials during web requests
PATH: https://example.com/api/** --> uses an OAuth module to verify credentials during API requests
If you search around you will see that there are a number of Apache modules available, some of which deal with security for web requests and some of which deal with security for API requests.
BEHAVIOUR
An API module should enable its clients to distinguish missing, invalid or expired API credential errors (usually classified as 401s) from other types of error. In normal usage, 401s should only occur in applications when access or refresh tokens expire, or, in some cases, when cookie encryption or token signing keys are renewed. These error cases are not permanent and re-authenticating the user will fix them.
Other types of error should return a different status code to the client, such as 400 or 500, and the client should display an error. As an example, if a client secret is misconfigured, it is a permanent error, and re-authenticating the user will not fix the problem. Instead it would result in a redirect loop. By testing these error conditions you will be satisfied that the behaviour is correct.
UPDATED CODE
You can then write simple code, perhaps as follows. The client-side code is in full control of the behaviour after a 401; whether you reload the page or just stop making background requests is up to you.
function updateData(args, callback) {
    $.ajax({
        url: "/api/getit?" + args,
        success: function (data) {
            callback(data);
        },
        statusCode: {
            401: function () {
                location.reload();
            }
        }
    });
}
Note also that the withCredentials flag is only needed for cross-origin requests, such as those from https://www.example.com to https://api.example.com, so I have omitted it from the code above, since it sounds like you have a same-domain setup.

Is there any way to save a full XHR payload in a cypress run?

We have our Cypress suite working fine locally on every machine, in every environment and location.
We've configured it to work with a Bitbucket pipeline, but one specific step consistently fails because of the API call it makes. The call goes to an external service, and some params in the payload are built dynamically with the request.
Our suspicion is that some of these params are not built correctly when running from the pipeline (it may be related to location, agent, etc.), because we are getting "Unauthorized".
So the issue is that we have no way to debug this API call from the pipeline, which is the only place where it fails.
So, do you have any suggestions on how to save the XHR Payload in a step in Cypress?
Store it in a mocha report.
Send it via email.
Maybe add it to a log.
Save it as an artifact.
I'm sorry, I'm just clueless about how to approach this, as I'm not an expert in either Cypress or Bitbucket Pipelines.
More specifically, I need to debug this call:
As I understand it, your external API call URL is known, right? If so, for debugging purposes I would suggest routing this call and then displaying it in the Cypress run logs, so you can compare the request payloads:
cy.route({ method: 'POST', url: `/ps/users` }).as('routedRequest');
...
cy.wait('@routedRequest').then((xhr) => {
    cy.log(JSON.stringify(xhr.request));
});
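To get the payload out of the pipeline as an artifact (one of the options listed in the question), you can write the intercepted request to disk and have Bitbucket collect that directory. The sketch below uses the newer cy.intercept API (cy.route was removed in later Cypress versions); the artifact path, alias name, and the pipeline config line are assumptions, not something from the original answer. The serialization helper is plain JavaScript so it can live outside the spec:

```javascript
// Turn an intercepted request into a stable JSON string so it can be
// written to disk and diffed between local and pipeline runs.
function serializeRequest(req) {
    return JSON.stringify(
        { url: req.url, method: req.method, headers: req.headers, body: req.body },
        null,
        2
    );
}

// In the spec (Cypress 6+ syntax; path and alias are illustrative):
// cy.intercept('POST', '/ps/users').as('routedRequest');
// ...
// cy.wait('@routedRequest').then(({ request }) => {
//     cy.writeFile('cypress/artifacts/routed-request.json',
//                  serializeRequest(request));
// });
```

Then point the step's `artifacts:` section in bitbucket-pipelines.yml at something like `cypress/artifacts/**`, and the file will be downloadable from the pipeline run.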

getJSON function get data from url on localhost only

I use the function below to get the continent code from the API. It works fine on localhost but fails in the live environment, which is a website.
$.getJSON('//www.geoplugin.net/json.gp?jsoncallback=?', function (data) {
// console.log(JSON.stringify(data, null, 2));
console.log(JSON.stringify(data.geoplugin_continentCode));
});
The warning I see in the console is:
Loading failed for the <script> with source
“https://www.geoplugin.net/json.gp?jsoncallback=jQuery16407901144106031991_1537089290623&_=1537089292750”.
I am not sure why it fails on the website https://www.example.com.
Could the SSL version be the problem? I'm not sure, as I tried it in a fiddle and it works fine: http://jsfiddle.net/om8ahkp3/
UPDATE
The problem turned out to be a cross-domain issue: this API uses a different URL for the SSL version, which I could not use because it is not free.
So I ended up using another API that has a free option, limited to 50k requests per month.
$.ajax({
    url: 'https://api.ipgeolocation.io/ipgeo?fields=is_eu&excludes=ip&apiKey=YOURKEY',
    dataType: 'json',
    success: function (json) {
        console.log("json.is_eu " + json.is_eu);
    }
});
What is the whole problem?
You are accessing a third-party site (cross-domain), so that site decides whether you can access it or not. When a site provides a service (like the geo service you are using), it determines which parts of that service are free.
In your case, if your source site's protocol is http (as on localhost) and the destination site (the service provider) is http too, you can access the geo service with your code above, because the third-party site allows this. But if you try to access the service from an https site (which I think you are doing now), geoPlugin does not allow it easily, or for free.
For these cases, destination sites provide other URLs and define user tiers (to charge for special services).
If the destination site were your own (which it is not here), you could grant access to specific referer sites, but as it stands...
I looked at the site to be sure. You must use this URL in this case:
https://ssl.geoplugin.net/json.gp?k=yourAPICode
But that is not all. What is k in the URL above? The site says:
"For SSL access, an API Key is required to offset certificate prices and costs €12 per year."
So if you need SSL access, you should either search for a free alternative (if one exists) or buy it.
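If you do buy the key, one way to keep a single code path for localhost and the live site is to pick the endpoint from the page's protocol. A small sketch (the key parameter and endpoint strings follow geoPlugin's documented URLs above, but treat them as illustrative):

```javascript
// Choose the geoPlugin endpoint that matches the page's protocol,
// since the free endpoint is plain HTTP only. `apiKey` is only
// needed for the SSL endpoint.
function pickGeoEndpoint(pageProtocol, apiKey) {
    return pageProtocol === 'https:'
        ? 'https://ssl.geoplugin.net/json.gp?k=' + apiKey + '&jsoncallback=?'
        : 'http://www.geoplugin.net/json.gp?jsoncallback=?';
}

// Usage in the page:
// $.getJSON(pickGeoEndpoint(location.protocol, 'YOURKEY'), function (data) {
//     console.log(data.geoplugin_continentCode);
// });
```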

Restrict ajax calls from outside of node app

I want to restrict AJAX calls to my application only. The endpoint should not respond when called from outside my application, even when the URL of the AJAX call is copied and pasted into a browser. I am using Node, Express, and MongoDB.
What I did was add some headers to my ajax calls:
// javascript
$.ajax({
    type: 'POST',
    dataType: 'json',
    headers: { "Content-Type": "application/json", "app-ver": 1.5 }
})
// php
$client_ver = floatval($_SERVER['HTTP_APP_VER']);
// I also check $_SERVER['CONTENT_TYPE'] to be 'application/json'
// and the POST method. (Ask me and I'll add it.)
if ($client_ver != 1.5)
{
    echo '{}';
    exit();
}
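Since the question mentions Express, the same header check might look like the sketch below on the Node side. This is illustrative only: the middleware name and mount path are mine, and it carries the same caveat as the PHP version, in that any client can forge the header.

```javascript
// Express-style middleware mirroring the PHP check above: reject
// requests that don't carry the expected app-ver header, answering
// with an empty JSON object just like the PHP snippet.
function requireAppVersion(expected) {
    return function (req, res, next) {
        if (parseFloat(req.get('app-ver')) !== expected) {
            return res.json({}); // same behaviour as the PHP: empty object
        }
        next(); // header matches, continue to the real handler
    };
}

// app.use('/api', requireAppVersion(1.5));
```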
Edit:
Another way would be to create a "session" with your app, meaning every request after a successful login has to carry (at least) a token,
and to have your app attach unique header key-values that only your server recognizes, so that no two requests are alike (e.g. include the time in a header, hash it together with the REST command, and have the server check it to make sure the command is unique and hasn't been tampered with).
* Your session-creation algorithm HAS to produce unique values; if you don't know how to do that, I suggest researching it further.
If your application runs on a user's machine (like a JavaScript web app), then it is impossible to prevent someone from making the calls from outside your app. You can make it harder by using a nonce with each request, but even then someone could build their own app that mimics yours and makes the calls in the correct order.
In short: you can't ever fully trust the client and you can't prevent someone from spoofing the client. You need to re-examine your requirements.

Server stops responding to requests after too many in a session?

I have a web app that uses frequent $.ajax() calls to transmit data to and from the server. This runs locally between a virtual-machine host and client.
The problem I'm having is that it seems to cut out after a certain number of consecutive calls in a session (no exact number has been determined). This can be seconds or minutes.
I tried assigning my $.ajax() calls to objects so they could be deleted, e.g.:
myApp.ajaxRegistry.myAjax = $.ajax({
    url: '/path/to/server',
    error: function () {
        delete myApp.ajaxRegistry.myAjax;
    },
    success: function () {
        delete myApp.ajaxRegistry.myAjax;
    }
});
I thought that may have improved it, but it could just be coincidence. It still fails frequently.
I've monitored the server access log when these failures occur, and from there it looks as if the request is never even made. There are no JavaScript errors in the browser console.
EDIT
The browser's network logger indicates that the browser is making the request, but the server is not responding (according to Apache's access log). After a few minutes it starts responding again, so I suspect some configuration issue on the server.
It might also be worth noting that the virtual machine server frequently loses time (some sort of annoying VirtualBox "feature"), so I wonder if that might be related.
UPDATE
I think my hunch about the server time was right. I finally managed to get ntp working properly on the VM, and I haven't encountered the problem for a few weeks now.
Just to have the answer in a separate post: the server time needs to be accurate (at least in this context), or the AJAX requests get confused.
