Server stops responding to requests after too many in a session? - javascript

I have a web app that uses frequent $.ajax() calls to transmit data to and from the server. It runs locally between a virtual machine host and its client.
The problem I'm having is that it seems to cut out after making a certain number of consecutive calls in a session (no exact number has been determined). This can happen within seconds or minutes.
I tried assigning my $.ajax() calls to objects so they could be deleted, e.g.:
myApp.ajaxRegistry.myAjax = $.ajax({
    url: '/path/to/server',
    error: function () {
        delete myApp.ajaxRegistry.myAjax;
    },
    success: function () {
        delete myApp.ajaxRegistry.myAjax;
    }
});
I thought that may have improved it, but it could just be a coincidence; it still fails frequently.
I've monitored the server access log when these failures occur, and I can see that the request isn't even reaching the server. There are no JavaScript errors in the browser console.
EDIT
The browser's network logger indicates that it is making the request, but the server is not responding (according to Apache's access log). After a few minutes, it starts responding again, so I'm thinking it's a configuration issue on the server.
It might also be worth noting that the virtual machine server frequently loses time (some sort of annoying VirtualBox "feature"), so I wonder if that might be related.
UPDATE
I think my hunch about the server time may have been right. I finally managed to get ntp to work properly on the VM and I haven't encountered this problem for a few weeks now.

Just to have the answer in a separate post: the server time needs to be accurate (at least in this context), or the AJAX requests start failing.
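For anyone debugging something similar, here is a minimal sketch, not part of the original fix, that compares the server's Date response header against the client clock to spot skew (the URL and the 30-second threshold are placeholders):

// Hedged sketch: detect clock skew between client and server.
// '/path/to/server' and the 30s threshold are illustrative only.
$.ajax({
    url: '/path/to/server',
    type: 'HEAD',
    success: function (data, status, xhr) {
        var serverTime = new Date(xhr.getResponseHeader('Date'));
        var skewMs = Math.abs(serverTime.getTime() - Date.now());
        if (skewMs > 30000) {
            console.warn('Server clock differs from client by ' + skewMs + ' ms');
        }
    }
});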

Related

Axios GET request returning ECONNRESET for some URLs

I am working on an application that uses an Express server to reach out to an API to fetch data. In our organisation, outbound traffic requires a proxy, which I have supplied to axios as below (not the real one):
let response = await axios.get(endpointUrl, {
    proxy: {
        host: "123.45.678.90",
        port: 0000,
    },
});
Passing various URLs into the axios get function returns varied results, with the following URLs returning a result:
https://www.boredapi.com/api/activity
https://api.ipify.org?format=json
https://jsonplaceholder.typicode.com/todos/1
Whereas the following URLs are returning an ECONNRESET error almost instantly:
https://api.publicapis.org/entries
https://randomuser.me/api/
https://reqres.in/api/users
I can't see any pattern between the URLs that are/are not working, so I wondered if a fresh set of eyes could spot the trait in them. It's important to note that all of these URLs return successfully in the browser; it's only through this axios call that they fail.
To add to the mystery, the URLs that don't work on my machine do work on a machine outside our organisation, so potentially a clue there?
Any help/guidance of course would be appreciated, thank you.
This error simply means that the other party closed the connection in a way that was probably not normal (or perhaps in a hurry).
For example, a socket connection may be closed by the other party abruptly for various reasons, or you may have lost your wifi signal while running your application. You will then see this error/exception on your end.
What could also be the case: at random times, the other side is overloaded and simply kills the connection as a result. If that's the case, what you can do about it depends on what you're connecting to exactly.
Solution - This is happening because you are not listening to/handling the 'error' event. To fix this, you need to implement a listener that can handle such errors.
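One hedged way to do that in the question's async/await style is to catch the failure around the axios call; for connection-level errors, axios surfaces the Node error code on the thrown error (the proxy values below are placeholders, not the real ones):

// Sketch: surface ECONNRESET instead of letting it crash the process.
const axios = require("axios");

async function fetchData(endpointUrl) {
    try {
        const response = await axios.get(endpointUrl, {
            proxy: { host: "123.45.678.90", port: 8080 }, // placeholder proxy
        });
        return response.data;
    } catch (err) {
        if (err.code === "ECONNRESET") {
            // The peer (or the proxy) dropped the connection; log, retry, or report.
            console.error("Connection reset:", err.message);
            return null;
        }
        throw err; // unrelated failure, surface it
    }
}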
If the URLs that fail on your machine work outside your organization while the others work in both places, it is most likely a problem with your proxy.
Some proxies might have configurations that make them strip headers or change the request in a way that the target does not receive it as intended.
I also encountered a problem with axios and proxies once. I had to switch libs to make it work. To be sure, I would recommend using a lib like "request" (deprecated) just to make sure it is not a problem with axios. There are multiple open issues on the axios repository for proxy issues.
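One workaround that comes up in those issues is to bypass axios's built-in proxy option and hand it a proxy agent instead. A hedged sketch, assuming the https-proxy-agent package and a placeholder proxy URL:

// Sketch: route HTTPS requests through the proxy via an agent,
// sidestepping axios's own proxy handling.
const axios = require("axios");
const { HttpsProxyAgent } = require("https-proxy-agent");

async function fetchViaProxy(endpointUrl) {
    const agent = new HttpsProxyAgent("http://123.45.678.90:8080"); // placeholder
    const response = await axios.get(endpointUrl, {
        httpsAgent: agent,
        proxy: false, // stop axios from applying its own proxy logic
    });
    return response.data;
}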
ECONNRESET is likely occurring either because the proxy runs into some sort of error and drops the connection or the target host finds something wrong with the incoming connection and decides to immediately drop it.
The target host may either be finding a problem because of the proxy, or it may be expecting something in the request that is missing.
Since you have evidence that all the requests work fine when running from a different location (not through your proxy) and I can confirm that your code works fine from my location (also not running through your proxy), it definitely seems like the evidence points at your proxy as causing some problem in some requests.
One way to debug proxy issues like this is to run a request through the proxy to a server you can inspect, and see exactly what the proxy has done to the incoming request compared to a request to the same host that doesn't go through the proxy. That should highlight some difference; you can then test whether it is causing the problem and eventually work on the proxy's configuration to correct it.
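A minimal sketch of such an inspection server, assuming Express (the port is arbitrary); point one proxied and one direct request at it and diff the output:

// Sketch: echo back whatever the server receives, so you can compare
// a proxied request against a direct one.
const express = require("express");
const app = express();

app.all("*", (req, res) => {
    res.json({ method: req.method, url: req.url, headers: req.headers });
});

app.listen(3000, () => console.log("echo server listening on :3000"));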

Tomcat, service unavailable 503

My webapp uses JSP / JavaScript / Google Visualization, and runs on Tomcat 7 on a 64-bit Windows server with enough resources dedicated to this app. It is still under testing, so I have control over the load.
The problem: when I work from a device on the same network as the server, everything works fine. But when I work from a device on a different network and a request takes a long time (more than 6 minutes), I get a Service Unavailable [503] message after 6 minutes of waiting, even though processing on the server carries on and completes successfully. I checked the Tomcat logs but couldn't find anything; everything seems to be working fine. I tried different solutions, but none of them worked for me:
Increase Tomcat's connector timeout.
Increase the Tomcat RAM.
Disable the server firewall.
Try different browsers.
Adjust the request timeout from the browser.
I experimented by setting Tomcat's Connector properties in conf/server.xml. I played around with all combinations and ranges of connectionTimeout and keepAliveTimeout.
The final configuration is:
<Connector port="80" protocol="HTTP/1.1"
address="0.0.0.0"
connectionTimeout="3600000"
redirectPort="8443" />
I'm wondering if anybody else has run into such a problem, and how they solved it.
I think your server.xml has incorrect data. Change the connector port from 80 to 8080; Tomcat's default HTTP port is 8080 (not sure this is the cause). Please update as below:
<Connector port="8080" protocol="HTTP/1.1"
address="0.0.0.0"
connectionTimeout="3600000"
redirectPort="8443" />
503 Service Unavailable
The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response.
Note: The existence of the 503 status code does not imply that a server must use it when becoming overloaded. Some servers may wish to simply refuse the connection.
Let me know if you face any issues.

slidingExpiration doesn't seem to work with ASP.NET MVC APIs when working with SPAs

My users, when on a SPA page, are getting logged out after a couple of hours, though if they use the older postback forms, they never time out. So that you have context, I have included enough code below to support the description of the issue at the bottom.
Web.config for authentication
<authentication mode="Forms">
<forms loginUrl="~/Account/Login" timeout="480" slidingExpiration="true" defaultUrl="~" ticketCompatibilityMode="Framework40"/>
</authentication>
My api controller
namespace my.Controllers
{
    public class ApiMotionController : ApiController
    {
        [Authorize(Roles = "Mover")]
        public IQueryable<Motions> Get()
JavaScript code
(function () {
    'use strict';

    angular.module('app')
        .controller('MotionManager', ['$scope', '$http', buildMotionManager]);

    function buildMotionManager($scope, $http) {
        /* Static members */
        $scope._whoami = 'MotionManager'; // Used for troubleshooting the controller

        /* Initialization code */
        getMotions($scope, $http)();

        /* Scope methods */
        $scope.refreshMotionsList = getMotions($scope, $http);
        $scope.addMotion = addMotion($scope, $http);
        $scope.playMotion = playMotion($scope, $http);
    }

    function getMotions($scope, $http) {
        return function () {
            $http.get('/api/getMotions')
                .success(function (data) {
                    $scope.motionList = data;
                })
                .error(function (data) {
                    console.log('FAIL', data);
                });
        };
    }

    function addMotion($scope, $http) {
        // Stub. Code not shown here.
    }

    function playMotion($scope, $http) {
        // Stub. Code not shown here.
    }
})();
There may be typos in the above code, since I retyped it from the original while sanitizing.
The code does work as expected, but the problem is that after hours of working, suddenly all web API calls are failing with a 401 error. That is, they are all acting like the user is now de-authenticated.
As above, I cannot duplicate this issue when I am using web forms, or even MVC forms, and re-posting whole pages. It happens only when I am using SPA-style coding. I haven't tried other SPA frameworks; since I have 6 months of Angular-directed code in this project, switching isn't an option.
I have considered putting an iframe with a background timer that fires against a form object, just to trick the browser into generating a proper form postback. I want to avoid doing that because it seems too hacky.
The only other key issue I have found is that I am seeing a bunch of SChannel errors being logged in my application event log on the IIS server. They are all 10,10, which isn't well documented. The 10 series is well documented outside of 10,10, but none of those suggestions seem to work, or are even relevant.
Server is IIS 7.5 and I have tried this on IIS 8.
Application Log Errors:
A fatal alert was generated and sent to the remote endpoint. This may result in termination of the connection. The TLS protocol defined fatal error code is 10. The Windows SChannel error state is 10.
Error State: 10, Alert Description: 10
A fatal alert was generated and sent to the remote endpoint. This may result in termination of the connection. The TLS protocol defined fatal error code is 40. The Windows SChannel error state is 1205.
An TLS 1.2 connection request was received from a remote client application, but none of the cipher suites supported by the client application are supported by the server. The SSL connection request has failed.
Discovery
Error code 40 means that there is a handshake issue. Since state management is custom for my platform, I decided to change it to InProc. So far, I have seen the new-error frequency in the error log reduce, but not disappear. However, I am still testing for the 401 issue.
Post discovery follow up
Had the certs re-issued, and the schannel errors cleared, but the problem remained.
I started going through the header information with a fine-tooth comb, even if it meant adding custom header information to accompany my server calls.
I have now included withCredentials: true in all $http calls, which has brought my failure rate down to around 15%. That means the failures are down to once or twice a day.
I started watching my 'auth' cookie on the client, and something confusing happens occasionally: the cookie will change unprompted, then change back, almost like the session is bouncing from the current one to a new one and back. So I have killed my cleanup process on the server's session table to see what I get there.
I had also been checking the system logs for exceptions, or SQL timeouts, and nothing.
Started converting all controllers to MVC controllers, but I hit conversion problem after conversion problem, including with the JSON serializer. I still don't understand the decision to stick with the MS serializer when the JSON.NET one works so much better.
Current Status
The last change I made was to add filters.Add(new AuthorizeAttribute()); to my FilterConfig.RegisterGlobalFilters function.
Everything is still failing. After investigating the IIS logs I am still seeing everything getting de-authenticated.
FF on Windows - Fail
Chrome on Windows - Fail
Chrome on Droid - Fail
Safari on iPad - Fail
IE on Windows - Fail
12/10 Discovery
I have found the real problem. The authentication in MVC controllers is just not compatible with the Web API controllers. So when I authenticate with the MVC controller, the Web API controllers basically ignore it and eventually time out on the authentication.
Latest Discovery
Apparently, when the ASP.NET worker process shut down and restarted, it would get a false flag that the database schema didn't exist. So I removed the check, and all reads and writes started working fine. It is interesting that the API controller would forge a new cookie when the MVC controller would fail the authentication; it was like it was creating a new provider instance. However, I couldn't find a second instance, so I have to assume the existing provider was being duplicated.
Fix that is being tested
Now that I have removed the DB test, I am testing the issue in long runs. Each run is longer than the worker process stays alive, but shorter than the session timeout.
Cornerstone of finding this bug
Apparently IIS Express was hiding the bug in that it seems to act without an external worker process. So I moved the test environment to my local IIS server.
It looks like there are several issues that were causing my problem, each one broken down here:
IIS Express wasn't closing sessions the same way that full IIS would.
So I moved the application to my local IIS, and added logging to everything.
The ASP.NET worker process would launch a new provider instance every time the API controllers were called.
This would cause a new schema check per call. MVC controllers would only cause this check once per initial launch.
Since my provider is married to my application schema, I just disabled the schema check.
Angular must be told to marshal the cookies.
So I added: cfg: { withCredentials: true, responseType: "json" }
The response type was to cover the occasional issue where I would see 'text/text'; now I always see 'application/json'. This seems to be a browser issue, mostly with IE.
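For reference, a minimal sketch of setting this globally rather than per call, assuming Angular 1.x's $httpProvider (not the author's exact code):

// Sketch: make every $http call send cookies, instead of passing
// withCredentials on each individual request.
angular.module('app').config(['$httpProvider', function ($httpProvider) {
    $httpProvider.defaults.withCredentials = true;
}]);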
I also had to add config.MapHttpAttributeRoutes(); to the Register method of my WebApiConfig class.
Using all of this, I was able to discover that the core of the problem was that every API call was causing my security provider to re-test the schema, whereas my MVC controllers are set to suppress that test after first load. The test always fails because I had to expand a couple of tables, but I didn't need the models changed.
Resolution: I removed the test from the provider. Since the provider is strongly tied into the rest of the application, it didn't seem logical to keep treating it as a typical ASP.NET Membership provider. And that was the top feature that I didn't need.
As a second benefit, I gained back a little bit of performance.

How would you create an auto-updating newsfeed without a reload?

How would I go about creating an auto-updating newsfeed? I was going to use NodeJS, but someone told me that wouldn't work once I got into the thousands of users. Right now, you can post text to the newsfeed, and it is saved into a MySQL database. Then, whenever you load the page, it displays all the posts from that database. The problem with this is that you have to reload the page every time there is an update. I was going to use this to tell the NodeJS server someone posted an update...
index.html
function sendPost(name, cont) {
    socket.emit("newPost", name, cont);
}
app.js
socket.on("newPost", function (name,cont) {
/* Adding the post to a database
* Then calling an event to say a new post was created
* and emit a new signal with the new data */
});
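For illustration, a hedged sketch of what that stub might do with Socket.IO, where io is the Socket.IO server instance and db.savePost is a hypothetical persistence helper:

// Sketch: persist the post, then broadcast it to every connected client.
socket.on("newPost", function (name, cont) {
    db.savePost(name, cont, function (err, post) { // hypothetical helper
        if (err) {
            return socket.emit("postError", err.message);
        }
        io.emit("newPost", post); // broadcast to all connected sockets
    });
});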
But that won't work for a ton of people. Does anyone have any suggestions for where I should start, and the APIs and/or programs I would need to use?
You're on the right track. Build a route on your Node webserver that causes it to fetch a news post and broadcast it to all connected clients. Then, just fire the request to Node.
On the Node-to-client front, you'll need to learn how to do long polling. It's rather easy: you let a client connect and do not end the response until a message goes through to it. You handle this through event handlers (Postal.JS is worth picking up for this).
The AJAX part is straightforward. $.get("your/node/url").then(function(d) { }); works out of the box. When it comes back (either success or failure), relaunch it. Set its timeout to 60 seconds or so, and end the response on the Node front the moment an event targets it.
This is how most sites do it. The problem with websockets is that, right now, they're a bit of a black sheep due to old IE versions not supporting them. Consider long polling instead if you can afford it.
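To make that flow concrete, here is a minimal long-polling sketch, not a definitive implementation: the /poll and /post routes, the Express dependency, and the 60-second window are all illustrative.

// Server sketch: hold each /poll response open until a post arrives
// or 60 seconds pass, then let the client reconnect.
const express = require("express");
const EventEmitter = require("events");
const feed = new EventEmitter();
feed.setMaxListeners(0); // allow many concurrent pollers
const app = express();

app.get("/poll", (req, res) => {
    const onPost = (post) => {
        clearTimeout(timer);
        res.json({ posts: [post] });
    };
    const timer = setTimeout(() => {
        feed.removeListener("post", onPost);
        res.json({ posts: [] }); // nothing new; the client will re-poll
    }, 60000);
    feed.once("post", onPost);
});

app.post("/post", express.json(), (req, res) => {
    feed.emit("post", req.body); // wake every waiting poller
    res.sendStatus(204);
});

app.listen(3000);

And the client side, relaunching the poll whenever it returns, success or failure:

function poll() {
    $.get("/poll")
        .done(function (d) { /* render d.posts */ })
        .always(function () { poll(); });
}
poll();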
(Psst. Whoever told you that Node wouldn't work for thousands of users is talking through their asses. If anything, Node is better suited to large concurrency than PHP, because a connection in Node takes almost nothing to keep alive thanks to its event-driven nature. Don't listen to naysayers.)

use javascript to test if server is reachable

I have a JavaScript application that loads data asynchronously from a server within my company's firewall. We sometimes need to show this application outside the firewall, but it is not always possible to get on the VPN. For this reason, we configured the firewall to allow public requests for this data on a specific port. The catch: inside the corporate network, I can only load data using the server's DNS name, due to a peculiar configuration of the firewall. Outside the corporate network, I have to use the server's public IP address.
Currently we distribute two versions of the JavaScript application: one that works within the firewall and one that works outside it. We would like to distribute one app that tests the URLs and then, depending on which is reachable, continues to use that address to load data.
I have been using jQuery's $.ajax() call to load the data, and I noticed there is a timeout parameter. I thought I could use a short timeout to determine which server is unreachable. However, this "countdown" doesn't seem to start until the initial connection to the server is made.
Any thoughts on how to determine, in JavaScript, which of two servers is reachable?
Use the error event:
$.ajax({
    url: dnsUrl,
    success: ..., // Normal operation
    error: function () {
        $.ajax({
            url: ipUrl,
            success: ... // Normal operation
        });
    }
});
You may put some dummy images on the server and try to load them; the onload event of the image that was successfully loaded should fire.
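A minimal sketch of that approach, reusing dnsUrl and ipUrl from the answer above (probe.gif is a hypothetical tiny image placed on each server):

// Sketch: try to load a tiny image from each candidate server;
// whichever fires onload is reachable.
function probe(baseUrl, onResult) {
    var img = new Image();
    img.onload = function () { onResult(true); };
    img.onerror = function () { onResult(false); };
    img.src = baseUrl + '/probe.gif?' + Date.now(); // cache-buster
}

probe(dnsUrl, function (ok) {
    var serverUrl = ok ? dnsUrl : ipUrl;
    // continue loading data from serverUrl
});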
