For a special project I chose to write my very own tiny web server, which is actually quite easy with the .NET Framework (the HttpListener class).
Everything was working beautifully until yesterday when I started to test with Internet Explorer.
My server kept crashing on any IE request! I finally managed to figure out the reason: IE literally stutters! (At least IE7 and IE8 do; I haven't tried other versions yet.)
The problem never occurs with Firefox, Chrome, or Opera.
To be more specific, I'm using dynamic JavaScript insertion in the web page to get around the same-origin policy in a cross-domain call scenario. This script generates a request to my tiny server. Let's say the request built is:
http://localhost:8081/myService?p1=A,p2=B,p3=C,p4=D,p5=E
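The insertion is done roughly like this (a simplified sketch; the real code builds 20 parameters or so):

    function callService(params) {
        // Build the query string, e.g. "p1=A,p2=B,p3=C,p4=D,p5=E"
        var query = params.join(',');
        // Inject a <script> tag pointing at the tiny server; loading it is
        // what makes the browser issue the HTTP request.
        var script = document.createElement('script');
        script.src = 'http://localhost:8081/myService?' + query;
        document.getElementsByTagName('head')[0].appendChild(script);
    }

    callService(['p1=A', 'p2=B', 'p3=C', 'p4=D', 'p5=E']);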
With Firefox, Chrome, or Opera, my server indeed receives a single request from the browser:
http://localhost:8081/myService?p1=A,p2=B,p3=C,p4=D,p5=E
With IE, my server receives a random number of partial requests like:
http://localhost:8081/myService?p1=A,
http://localhost:8081/myService?p1=A,p2=B,
http://localhost:8081/myService?p1=A,p2=B,p3=C,
http://localhost:8081/myService?p1=A,p2=B,p3=C,p4=D,
(and in fact a lot of them since I have 20 parameters or so...)
and finally:
http://localhost:8081/myService?p1=A,p2=B,p3=C,p4=D,p5=E
This is basically no big deal, as my logic is able to handle missing parameters. The problem is that each incomplete request actually closes the connection (error 1229). That was crashing the server, which was trying to answer each request. The fix was easy, but I still don't like the fact that the server is flooded with those intermediate "unanswerable" and thus useless requests.
(Moreover, the request sometimes looks like:
http://localhost:8081/myService?p1=A,p2=B,p3=C,?p1=A,p2=B,p3=C,p4=D,p5=E
!!! )
I traced the JavaScript: the generation function is called only once (whatever the browser), so this really sounds like an IE issue.
Does anyone have an idea how to prevent this behaviour in IE?
I have been stuck on this for a couple of weeks now, and this is a follow-on from the SO question Delphi REST Debugger Returns Error 429 Too Many Requests but Browser Returns JSON as Expected.
I want to get the content of a URL response using the TNetHTTPRequest and TNetHTTPClient components. I was continually getting 429 "Too Many Requests" errors. When using Firefox's Inspect Element to look at network and storage, I discovered that I needed to receive cookies and then send those cookies with my request. Unfortunately, one of the cookies essential to the website content seems to be dependent (I think) on the execution of JavaScript. I went back to first principles and dropped a TWebBrowser on a form (VCL), and sure enough the browser shows a JavaScript error "Expected Identifier".
When I use TWebBrowser in FMX, it does not throw an error; it just does not return the website contents at all and remains blank. I need FMX as I will be targeting a cross-platform mobile environment.
The URL is https://shop.coles.com.au/a/national/home
I use Delphi Community Edition 10.3.3 Rio.
The URL returns perfectly in commercial browsers (Firefox, Safari, Chrome) and even in CEF4Delphi. Unfortunately, I can't use CEF as I need cross-platform support.
I would like to know how to get the website content returned to the browser (or, even better, to the NetHTTPClient) without script errors, and how to access the browser's current cookies.
Any help will be most appreciated.
Thanks,
John.
URL returns perfectly in commercial browsers ... without script errors and how to access the browsers current cookies
If you inspect the network traffic (F12 > Network, then request your URL) or use uMatrix (to block everything that doesn't belong to the domain by default), you'll see the JS does at least one XHR to amazonaws.com. Your HTTP transfer alone (as done by TNetHTTP*) works fine, and you get the same resource that each internet browser gets.
However, you don't operate on what you got (in contrast to the internet browser, which also automatically parses the HTML, sees JS resources, and executes them). TWebBrowser doesn't do what you take for granted, most likely due to security settings (try to get an error console in there, preferably via F12 again). You need to do the same: parse the HTML resource for JS URIs, request those, and execute what you get, while still providing the same cookie environment.
For executing JS you could use Chakra or mORMot or BESEN. It's challenging at first, but the more you understand about HTTP (including cookies) and a JS engine, the more you'll see why "things work" in one situation and not in another. There's a reason an internet browser is a very complex piece of software and not just a downloader.
As per this, forcing IE11 Quirks mode might already cure your problem when using TWebBrowser:
TBrowserEmulationAdjuster.SetBrowserEmulationDWORD(TBrowserEmulationAdjuster.IE11_Quirks);
I have had an unusual problem with my Spring REST server for a couple of months: performance with Firefox (52.0.1, 32-bit and 64-bit) is much, much worse than with Internet Explorer (9-11) when receiving large amounts of JSON data.
The server that is hosting my service is a Windows Server Enterprise 2007 SP2.
If I could, I would update it, but that is not allowable.
To further define my problem: making a request to an endpoint to retrieve the JSON data gets a response almost immediately, but in Firefox it is stuck in "Receiving" for sometimes upwards of 90 seconds. Internet Explorer is typically done in 2-3 seconds. (See the photo below of the network output from FF.)
I have a feeling that this may be a server problem, because before this service was hosted on this server, it was hosted on a Linux VM while we were developing and testing it. During that time the response times with FF and IE were both the same and very quick (in fact, the same response time as IE has now).
I tagged javascript in this not to tag-spam, but because when the data comes back it does go through some rigorous transformations in the JS, and I'm not sure if that is contributing to the problem. (This is doubtful, though, because if IE can handle it, FF should, given that Mozilla developed JS.)
My question is: would you say this problem is related to the server, the JS implementation, or something else? And what might I do to solve it?
I have a consistently extremely long (2min+) blocking call on Mac/Chrome. The same set of steps works just fine on other operating systems and browsers (and even some other Macs). The site isn't super chatty, and there aren't any other requests that take anywhere over 2 seconds.
The blocking PUT call almost always follows a DELETE call to a different URL (same server). According to my console logs, the server actually receives and returns the results of the PUT call very quickly. So Chrome thinks it is blocked, but it has actually already been processed!
Any ideas?
Following up on this post: Chrome and Firefox CORS AJAX calls get aborted on some Mac machines
This Mac had a DISABLED Sophos antivirus installed. Uninstalling it completely fixed the issue.
ARGGGGG.
Hopefully someone else finds this post and is able to spend less than the hours we spent on it.
I'm trying to troubleshoot a problem on a client's machine for our website. We're using an AJAX call to get information on a page in order to select additional parameters. Our callback function has a block of code that checks whether the AJAX response is an error or is correct. On every other computer we've tested this with, the AJAX comes back fine. However, for one particular client, we're seeing the AJAX come back with the error message, meaning the response never arrived successfully or that it's corrupted or broken.
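The callback is shaped roughly like this (a simplified sketch; the endpoint name and logging stand in for our real code):

    function onPageInfo(xhr) {
        if (xhr.status === 200) {
            // Success path: parse and use the returned data
            var info = JSON.parse(xhr.responseText);
            console.log('Got page info', info);
        } else {
            // Error path: this is the branch the client's machine keeps hitting
            console.error('AJAX failed with status ' + xhr.status);
        }
    }

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/getPageInfo', true);   // illustrative endpoint, not the real one
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) onPageInfo(xhr);
    };
    xhr.send();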
Does anyone know how this could happen? The client is using IE 8, and I've tested IE 8, IE 9, IE 10, and Chrome, and all of those work on my computers.
EDIT: As of now, we don't have access to the system or the network that is causing the error. They are trying to see if they can accept everything from our domain and whether that fixes it, but right now I can't put Fiddler on their computer.
I've seen any amount of random behaviour caused by virus scanners and so-called network security products. Try adding an exception for your site to any security software your client is running.
The other thing to do is to use Wireshark, Fiddler, etc. to see what's actually happening at the network level.
The setup is as follows:
Firefox (both 3.x and 4b) with properly set up and working certificates, including a client certificate.
Web page with an XMLHttpRequest() type of AJAX call to a different subdomain.
Custom web server in said subdomain accepting requests, responding with a permissive Access-Control-Allow-Origin header and requiring client verification (sketched below).
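For illustration only, a minimal Node.js-style sketch of such a server (certificate paths and the allowed origin are placeholders, not the actual setup):

    // Requires a client certificate and answers with permissive CORS headers.
    // Note: when the XHR is made with credentials, Access-Control-Allow-Origin
    // cannot be "*"; it must name the calling origin explicitly, and
    // Access-Control-Allow-Credentials must be "true".
    var https = require('https');
    var fs = require('fs');

    var options = {
        key: fs.readFileSync('server.key'),      // placeholder paths
        cert: fs.readFileSync('server.crt'),
        ca: fs.readFileSync('client-ca.crt'),
        requestCert: true,                        // ask for a client certificate
        rejectUnauthorized: true                  // refuse requests without one
    };

    https.createServer(options, function (req, res) {
        res.setHeader('Access-Control-Allow-Origin', 'https://www.example.com');
        res.setHeader('Access-Control-Allow-Credentials', 'true');
        res.end('{"ok": true}');
    }).listen(8443);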
The problem is that Firefox abruptly aborts the request (well, that's what it says in Firebug, anyway). Running the setup with openssl s_server instead hints that Firefox doesn't even send the client certificate:
140727260153512:error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer did not return a certificate:s3_srvr.c:2965:ACCEPT
The exact same setup works perfectly with Chrome, suggesting perhaps a bug in Firefox. However, performing the AJAX call with a <script> element injected into the DOM seems to work as intended...
So, has anyone else run into this? Is it a bug? Any workarounds? Am I missing something obvious?
Chiming in 5 years later probably isn't much help to the OP, but in case someone else has this issue in the future...
Firefox appears not to send the client certificate with a cross-origin XHR request by default. Setting withCredentials=true on the XHR instance resolved the issue for me. Note that I also did not see this problem with Chrome, only Firefox.
For more info see this Mozilla Dev Network blog post. In particular, the following statement:
By default, in cross-site XMLHttpRequest invocations, browsers will not send credentials. A specific flag has to be set on the XMLHttpRequest object when it is invoked.
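In code, the flag looks like this (the URL is a placeholder for your cross-subdomain endpoint):

    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://api.example.com/resource', true);  // placeholder URL
    xhr.withCredentials = true;  // without this flag, Firefox omits the client certificate
    xhr.onload = function () {
        console.log(xhr.status, xhr.responseText);
    };
    xhr.send();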
The reason injecting the script works, as opposed to a plain XHR request, is the same-origin policy. This would probably explain why Chrome allows the XHR but FF does not; Chrome considers the subdomain part of the same origin, but FF does not.
Injecting scripts from other domains (which is what Google Analytics does) is allowed, and is one of the common practices for handling this situation.
The way my team handles this situation is by making a request through a server-side proxy.
I would recommend using a server-side proxy if you can, but the script injection method works fine as long as the code is coming from a trusted source.
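If it helps, here is a bare-bones sketch of such a proxy (Node.js here purely for illustration; the target host and paths are placeholders):

    // The page requests /proxy/... on its own origin; this server forwards the
    // request to the remote host, so the browser never makes a cross-origin call.
    var http = require('http');
    var https = require('https');

    var TARGET_HOST = 'api.example.com';   // placeholder remote service

    http.createServer(function (req, res) {
        if (req.url.indexOf('/proxy/') !== 0) {
            res.writeHead(404);
            res.end();
            return;
        }
        var options = {
            hostname: TARGET_HOST,
            path: req.url.replace(/^\/proxy/, ''),   // strip the /proxy prefix
            method: req.method,
            headers: { host: TARGET_HOST }
        };
        var upstream = https.request(options, function (upstreamRes) {
            res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
            upstreamRes.pipe(res);   // stream the remote response back to the page
        });
        req.pipe(upstream);          // forward any request body
    }).listen(8080);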
I also found this article which describes your situation.