$.ajax web service call failing in a web farm scenario - javascript

I am working on an ASP.NET Web Forms project.
In a particular scenario I want to set some session values when the user checks/unchecks checkboxes in JavaScript.
Since the session is not accessible from JavaScript, I developed a web service; I call its web method from the client, and that web method sets the values into the session.
Here is my JS web service call:
$.ajax({
    async: false,
    url: baseUrl + '/' + "WebServices/ExtraInfoWebService.asmx/MyWebMethod",
    data: { hdnValue: $("[id$='hdnCCSarray']").val() },
    success: gett
});
This web service call works perfectly on my development machine, which has a single IIS server, but fails in the production environment, which has multiple IIS boxes. After observing carefully, I found that the call fails only in the IE browser.
If anyone has a suggestion on this, please let me know.
Thanks in advance!

If you are running in a web farm, you cannot use the default session state mode (InProc), because the different nodes of the farm will not be able to share the session values. You will need to use an out-of-process session state mode. There are two available:
StateServer
SQLServer
You can read more about the different session state modes in this MSDN article.
Or even better: refactor your code so that it doesn't rely on any session at all. I find it a very bad design to have a web service which is not stateless.
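As a rough sketch of that stateless approach (this is not the original poster's code; loadExtraInfo is a hypothetical wrapper), the checkbox state is simply sent with every call that needs it, so nothing has to be stored in the session between requests:
// Hedged sketch: the checkbox state travels with each request that needs it,
// so no server node has to remember it between calls.
function loadExtraInfo() {
    $.ajax({
        url: baseUrl + '/' + "WebServices/ExtraInfoWebService.asmx/MyWebMethod",
        data: { hdnValue: $("[id$='hdnCCSarray']").val() }, // pass the state every time
        success: gett // the same callback as in the question
    });
}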

Related

getJSON function get data from url on localhost only

I use the function below to get the continent code from the API. It works fine on localhost but fails in the live environment, which is a website:
$.getJSON('//www.geoplugin.net/json.gp?jsoncallback=?', function (data) {
    // console.log(JSON.stringify(data, null, 2));
    console.log(JSON.stringify(data.geoplugin_continentCode));
});
The warning I see in the console is:
Loading failed for the <script> with source
“https://www.geoplugin.net/json.gp?jsoncallback=jQuery16407901144106031991_1537089290623&_=1537089292750”.
I am not sure why it fails on the website https://www.example.com.
Could the SSL version be the problem? I am not sure; I tried it in this fiddle and it works fine: http://jsfiddle.net/om8ahkp3/
UPDATE
The problem was due to a cross-domain issue, as this API uses a different URL for its SSL version. I was not able to use the SSL version because it is not free.
So I ended up using another API which has a free option, limited to 50k requests per month.
$.ajax({
    url: 'https://api.ipgeolocation.io/ipgeo?fields=is_eu&excludes=ip&apiKey=YOURKEY',
    dataType: 'json',
    success: function (json) {
        console.log("json.is_eu " + json.is_eu);
    }
});
What is the whole problem?
You want to access a third-party site (cross-domain), so that site decides whether you can access it or not. When a site provides a service (like the geo service you are using), it determines which parts of its service are free.
In your case, if your source site's protocol is http (as on localhost) and the destination site (the service provider) is http too, you can access this geo service with your code above, because the third-party site allows that. But if you want to access the service from an https site (which I think you are trying now), geoPlugin does not allow it easily or for free.
In such cases, destination sites provide different URLs and define user tiers (to charge for special services).
In fact, if the destination site were your own (which it is not in this case), you could grant the needed access to specific referrer sites, but here you cannot.
I looked at its site to be sure. You must use this URL in this case:
https://ssl.geoplugin.net/json.gp?k=yourAPICode
But that is not everything. What is k in the URL above? The site says:
"For SSL access, an API Key is required to offset certificate prices and costs €12 per year."
I don't know your situation, but if you need SSL access, you should look for free alternatives (if any exist) or buy it.
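If you do buy the key, a rough sketch of the SSL call might look like the following (this assumes the SSL endpoint accepts the same jsoncallback JSONP parameter as the plain-HTTP one; yourAPICode stands for the purchased key):
// Hedged sketch: same JSONP pattern as in the question, but against the SSL endpoint.
$.getJSON('https://ssl.geoplugin.net/json.gp?k=yourAPICode&jsoncallback=?', function (data) {
    console.log(JSON.stringify(data.geoplugin_continentCode));
});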

How to prevent passing HTML5 local storage data between two or more different applications which were deployed at the same container?

I have a big problem with user session monitoring.
I need to kill the session after the last application tab is closed, or after the browser is closed.
This work was done: I developed a small library for working with local storage and session storage, and a mechanism for monitoring opened browser tabs.
It is just a simple object with a tab counter:
{
    "session_instance_count" : 0
}
And simple methods for writing this object to localStorage:
SessionMonitor.prototype.writeValueByKeyToLS = function (key, value) {
    var own = this;
    own.getLocalStorageInstance().setItem(key, value);
};
SessionMonitor.prototype.getLocalStorageInstance = function () {
    return 'localStorage' in window && window['localStorage'];
};
But after deploying another application to Tomcat, I found serious trouble with local storage.
All values stored by the first application were available in the second application:
if I store some data on http://localhost:8080/app1, that data is also available on http://localhost:8080/app2.
App1 sends a request to open App2 with some parameters.
Note: I do not have access to modify the source code of the second application.
This is my question:
How can I prevent HTML5 local storage data from being shared between two or more applications deployed in the same container?
The local storage solution was dropped, because it does not provide a way to keep data separate between applications served from the same container under the same domain.
I looked at a solution based on configuring ports in tomcat/conf/server.xml,
so that each deployed application gets its own port, or an extra port is registered for the applications:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<Connector port="8086" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8444" />
With http://localhost:8080/app2 and http://localhost:8086/app1, the local storages will be unique, because the origins differ.
But I can't use this solution, because the customer doesn't want it.
I found one more solution: a link between the server side and the client side.
It simply generates a unique window.name and stores it inside the server session (see the sketch below).
I hope this will be helpful for somebody.
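A minimal sketch of that idea follows (the /app1/session-instance endpoint and the key prefixing are assumptions for illustration, not the original code): the page asks the server once for a unique instance id, keeps it in window.name so it survives navigation within the tab, and uses it to namespace anything written to localStorage.
// Hypothetical sketch: per-tab identity via window.name, used to namespace localStorage keys.
function ensureInstanceId(callback) {
    if (window.name) {
        callback(window.name);            // this tab already has its id
        return;
    }
    // Assumed endpoint that generates an id and remembers it in the server-side session.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/app1/session-instance');
    xhr.onload = function () {
        window.name = xhr.responseText;   // survives reloads within the same tab
        callback(window.name);
    };
    xhr.send();
}

SessionMonitor.prototype.writeValueByKeyToLS = function (key, value) {
    var own = this;
    ensureInstanceId(function (id) {
        // Prefix the key so values written by this app/tab do not collide with others.
        own.getLocalStorageInstance().setItem(id + ':' + key, value);
    });
};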

slidingExpiration doesn't seem to work with ASP.NET MVC APIs when working with SPAs

My users, when on a SPA page, are getting logged out after a couple of hours, though if they use the older postback forms, they never time out. So that you have context, I have included enough code below to support the description of the issue at the bottom.
Web.config for authentication
<authentication mode="Forms">
<forms loginUrl="~/Account/Login" timeout="480" slidingExpiration="true" defaultUrl="~" ticketCompatibilityMode="Framework40"/>
</authentication>
My API controller
namespace my.Controllers
{
    public class ApiMotionController : ApiController
    {
        [Authorize(Roles = "Mover")]
        public IQueryable<Motions> Get()
JavaScript code
(function () {
    'use strict';

    angular.module('app')
        .controller('MotionManager', ['$scope', '$http', buildMotionManager]);

    function buildMotionManager($scope, $http) {
        /* Static members */
        $scope._whoami = 'MotionManager'; // Used for troubleshooting the controller

        /* Initialization code */
        getMotions($scope, $http)();

        /* Scope methods */
        $scope.refreshMotionsList = getMotions($scope, $http);
        $scope.addMotion = addMotion($scope, $http);
        $scope.playMotion = playMotion($scope, $http);
    }

    function getMotions($scope, $http) {
        return function () {
            $http.get('/api/getMotions')
                .success(function (data) {
                    $scope.motionList = data;
                })
                .error(function (data) {
                    console.log('FAIL', data);
                });
        };
    }

    function addMotion($scope, $http) {
        // Stub. Code not shown here.
    }

    function playMotion($scope, $http) {
        // Stub. Code not shown here.
    }
})();
There may be typos in the above code, since I retyped it from my original while sanitizing it.
The code does work as expected, but the problem is that after hours of working, suddenly all web API calls start failing with a 401 error. That is, they all act as if the user is now de-authenticated.
As noted above, I cannot duplicate this issue when I am using Web Forms, or even MVC forms, and re-posting whole pages. It happens only when I am using SPA-style coding. I haven't tried other SPA frameworks; since I have 6 months of Angular-directed code in this project, switching isn't an option.
I have considered putting in an iframe with a timer that fires off in the background against a form object, just to trick the browser into generating a proper form postback. I want to avoid doing that, because it seems too hacky.
The only other key issue I have found is that I am seeing a bunch of Schannel errors being logged in my application event log on the IIS server. They are all 10,10, which isn't well documented. The 10 series is well documented outside of 10,10, but none of those suggestions seem to work, or are even relevant.
The server is IIS 7.5, and I have also tried this on IIS 8.
Application Log Errors:
A fatal alert was generated and sent to the remote endpoint. This may result in termination of the connection. The TLS protocol defined fatal error code is 10. The Windows SChannel error state is 10.
Error State: 10, Alert Description: 10
A fatal alert was generated and sent to the remote endpoint. This may result in termination of the connection. The TLS protocol defined fatal error code is 40. The Windows SChannel error state is 1205.
An TLS 1.2 connection request was received from a remote client application, but none of the cipher suites supported by the client application are supported by the server. The SSL connection request has failed.
Discovery
Error code 40 means that there is a handshake issue. Since state management is custom for my platform, I decided to change it to InProc. So far, I have seen the frequency of new errors in the error log decrease, but not disappear. However, I am still testing for the 401 issue.
Post-discovery follow-up
I had the certs re-issued, and the Schannel errors cleared, but the problem remained.
I started going through the header information with a fine-tooth comb, even if it meant adding custom header information to accompany my server calls.
I have now included withCredentials: true in all $http calls, which has brought my failure rate down to around 15%; that means the failures are down to once or twice a day.
I started watching my 'auth' cookie on the client, and something confusing happens occasionally: the cookie changes without prompting, then changes back, almost as if the session is bouncing from the current one to a new one and back again. So I have killed my cleanup process on the session table on the server, to see what I get there.
I have also been checking the system logs for exceptions or SQL timeouts, and found nothing.
I started to convert all controllers to MVC controllers, but I hit conversion problem after conversion problem, including with the JSON serializer. I still don't understand the decision to stick with the MS serializer when the JSON.NET one works so much better.
Current Status
The last change I made was to add filters.Add(new AuthorizeAttribute()); to my FilterConfig.RegisterGlobalFilters function.
Everything is still failing. After investigating the IIS logs I am still seeing everything getting de-authenticated.
FF on Windows - Fail
Chrome on Windows - Fail
Chrome on Droid - Fail
Safari on iPad - Fail
IE on Windows - Fail
12/10 Discovery
I have found the real problem. The authentication in the MVC controllers is just not compatible with the Web API controllers. So when I authenticate with the MVC controller, the Web API controllers basically ignore it and eventually time out on the authentication.
Latest Discovery
Apparently, when the ASP.NET worker process shut down and restarted, it would get a false flag that the database schema didn't exist. So I removed the check, and all reads and writes started working fine. It is interesting that the API controller would forge a new cookie when the MVC controller failed the authentication. It was as if it were creating a new provider instance. However, I couldn't find a second instance, so I have to assume the existing provider was being duplicated.
Fix that is being tested
Now that I have removed the DB test, I am testing the issue in long-run tests. Each long run is longer than the worker process stays alive, but shorter than the session timeout.
Cornerstone of finding this bug
Apparently IIS Express was hiding the bug in that it seems to act without an external worker process. So I moved the test environment to my local IIS server.
It looks like there were several issues causing my problem, each one broken down here:
IIS Express wasn't closing sessions the same way that full IIS would.
So I moved the application to my local IIS and added logging to everything.
The ASP.NET worker process would launch a new provider instance every time the API controllers were called.
This would cause a new schema check per call; MVC controllers would only cause this check once, at initial launch.
Since my provider is married to my application schema, I just disabled the schema check.
Angular must be told to marshal the cookies.
So I added: cfg: { withCredentials: true, responseType: "json" } (see the sketch after this list).
The response type was to cover the occasional issue where I would see 'text/text'. Now I always see 'application/json'. This seems to be a browser issue, mostly with IE.
I also had to add config.MapHttpAttributeRoutes(); to the Register method of my WebApiConfig class.
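A minimal sketch of what "telling Angular to marshal the cookies" looked like in practice (the exact shape of the config object passed to $http is my assumption; getMotions is the function from the code above):
function getMotions($scope, $http) {
    return function () {
        // Send the forms-auth cookie with the API call and force a JSON response
        // instead of the occasional 'text/text'.
        $http.get('/api/getMotions', { withCredentials: true, responseType: 'json' })
            .success(function (data) {
                $scope.motionList = data;
            })
            .error(function (data) {
                console.log('FAIL', data);
            });
    };
}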
Using all of this, I was able to discover that the core of the problem was that every API call was causing my security provider to re-test the schema, whereas my MVC controllers are set to suppress that test after the first load. The test always fails, because I had to expand a couple of tables but didn't need the models changed.
Resolution: I removed the test from the provider. Since the provider is strongly tied into the rest of the application, it didn't seem logical to keep treating it as a typical ASP.NET Membership provider, and that was the top feature I didn't need.
As a second benefit, I gained back a little bit of performance.

Authenticating and fetching data from couchdb using jQuery

I have a web app served by CherryPy. Within this app, I would like to fetch some data from a CouchDB server, preferably using jQuery. I am having trouble authenticating against the server. When using:
$.couch.login({
    name: 'username',
    password: 'password',
    success: function() {
        console.log('Ready!');
    }
});
it sends the login request to the CherryPy server, not to CouchDB. According to this, I can use jQuery.ajax settings, and therefore I have tried:
$.couch.login({
    url: 'http://127.0.0.1:5984',
    name: 'username',
    password: 'password',
    success: function() {
        console.log('Ready!');
    }
});
but it does not seem to work.
Any ideas? In addition, can anybody point me to a good tutorial or a simple web app developed in a similar fashion, i.e. a "standard" web page (not a CouchApp) that contains jQuery which gets info from CouchDB?
What you are currently doing is telling jquery.couch.js to log in against that URL (it needs to POST to /_session).
I believe you need to set the urlPrefix property on $.couch:
$.couch.urlPrefix = "http://localhost:5984/"; // run this before anything else with $.couch
Don't forget that inside a browser, JavaScript enforces the same-origin policy. Since the HTML page is presumably not being loaded from port 5984, you'll have to figure out some clever way around it, such as CORS or mod_proxy.
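Putting the pieces together, a minimal sketch might look like this (assuming CouchDB is configured to allow CORS from the CherryPy origin with credentials; otherwise proxying CouchDB through the same origin is the alternative). The raw $.ajax variant is my illustration of the /_session POST mentioned above, not the original code:
// Point jquery.couch.js at CouchDB before any other $.couch call.
$.couch.urlPrefix = "http://127.0.0.1:5984";

$.couch.login({
    name: 'username',
    password: 'password',
    success: function () {
        console.log('Ready!');
    }
});

// Or, without jquery.couch.js, a plain cross-origin POST to CouchDB's session endpoint:
$.ajax({
    type: 'POST',
    url: 'http://127.0.0.1:5984/_session',
    data: { name: 'username', password: 'password' },
    xhrFields: { withCredentials: true }, // send/receive the auth cookie cross-origin
    success: function () {
        console.log('Ready!');
    }
});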

use javascript to test if server is reachable

I have a javascript application that loads data asynchronously from a server within my company's firewall. We sometimes need to show this application outside of the firewall, but it is not always possible to get on the VPN. For this reason, we configured the firewall to allow public requests for this data on a specific port. The catch: when I am inside the corporate network, I can only load data using the server's DNS name, due to a peculiar configuration of the firewall. Outside of the corporate network, I have to use the server's public IP address.
Currently we distribute two versions of the javascript application -- one that works within the firewall and one that works outside of it. We would like to distribute one app -- one that tests the URLs and then, depending on which is reachable, continues to use that address to load data.
I have been using jQuery's $.ajax() call to load the data and I noticed there is a timeout parameter. I thought I could use a short timeout to determine which server is unreachable. However, this "countdown" doesn't seem to start until the initial connection to the server is made.
Any thoughts on how to determine, in javascript, which of two servers is reachable?
Use the error event:
$.ajax({
    url: dnsUrl,
    success: ..., // Normal operation
    error: function () {
        $.ajax({
            url: ipUrl,
            success: ... // Normal operation
        });
    }
});
You may put some dummy images on the server and try to load them; the onload event of the image that loads successfully will fire (see the sketch below).
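A rough sketch of that probe idea (the probe image path, the server addresses, and the pickServer/loadData helpers are assumptions for illustration):
// Hypothetical sketch: probe both servers with a tiny image and use whichever answers.
function probeServer(base, onReachable, onUnreachable) {
    var img = new Image();
    img.onload = function () { onReachable(base); };
    img.onerror = function () { onUnreachable(base); };
    // Cache-buster so a previously cached probe image doesn't give a false positive.
    img.src = base + '/probe.gif?_=' + new Date().getTime();
}

probeServer('http://internal-dns-name:8080', pickServer, function () {
    probeServer('http://203.0.113.10:8080', pickServer, function () {
        console.log('Neither server is reachable');
    });
});

function pickServer(base) {
    // Continue loading data from the server that responded.
    loadData(base); // assumed existing data-loading function
}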
