I'm using Express and Node.js 0.6.7. The server is an EC2 Small instance.
For some reason, Node.js sometimes does not receive my AJAX requests. Firebug shows the request as "pending..." with the spinning wheel.
It takes about 30 seconds before my Node.js server actually gets the request. (I console.log when the route is hit, to check.)
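That console.log check can be made systematic with a tiny Express-style logging middleware that timestamps the moment a request actually reaches Node (a sketch; `logArrival` is an illustrative name, wired up with `app.use(logArrival)` in Express):

```javascript
// Sketch: timestamp the instant Node actually sees each request, so a
// 30-second gap can be attributed to the network/proxy, not the app.
// In Express: app.use(logArrival);
function logArrival(req, res, next) {
  req.arrivedAt = Date.now(); // when the request reached Node
  console.log(new Date(req.arrivedAt).toISOString(), req.method, req.url);
  next();
}
```

Comparing this timestamp against the time the browser fired the request shows on which side of the wire the delay lives.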
I've read that a browser only allows 6 parallel connections, but in Firebug I never have more than 3. In fact, I wait until everything, including all my AJAX requests, has loaded. Only after that do I click to trigger the AJAX call...and it hangs. It is the only spinning loading wheel; everything else has loaded, so simultaneous connections cannot be the problem, right?
The server returns responses very fast--the problem is that it is not receiving the request. Literally, the server does not see the request until about 30 seconds later.
This happens with static images as well (basically any request; it's really random).
It happens on Firefox 10 and Chrome (latest stable), but never on Safari. It's random, and it happens in different spots.
Note: this does not happen on my EC2 Micro instance (only on the Small instance). Both run the latest version of Ubuntu.
Screenshot: http://i.imgur.com/X7801.png (as you can see, there is only one spinner; everything else has returned. The server is NOT under heavy load--it's idle--yet it is not receiving the request.)
Note: I am using the AWS Load Balancer, but that's not the problem: I turned it off and the issue still happens.
Figured it out.
Cloudflare! I disabled its security features and the problem went away.
Related
My scenario is a WebSocket server on Windows Server that various HTML5/JavaScript clients connect to to play card games. Thousands of small commands are sent back and forth every minute. Everything is absolutely reliable and stable with Windows clients and Android clients.
If only it weren't for Apple and iOS. Here's what happens on the iPad with iOS 15:
For quite a while (sometimes 10 minutes, sometimes an hour), everything works as expected. Then suddenly the onmessage event stops firing. However, the connection is still active, and data can still be sent to the server. The only option then is to disconnect and reconnect.
It doesn't matter whether the connection uses ws or wss.
It also doesn't matter whether the frames are sent as text frames or binary frames.
I tried removing and reattaching the onmessage event handler when the event stops firing, also without success.
I also tried following the tip of setting the NSURLSession WebSocket flag to false in the Safari settings. No success. Besides, it would be difficult to recommend that to customers.
The idea of disconnecting and reconnecting in the event of an error is problematic, because data has been lost by then and the game flow is no longer synchronized as a result.
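One hedged way around that data-loss problem (my sketch, not part of the original post; all names are hypothetical) is to tag every server frame with a sequence number and keep a bounded replay buffer, so that after a forced reconnect the client can report the last number it processed and have the server resend what it missed:

```javascript
// Illustrative sketch: the server numbers outgoing frames and keeps a
// bounded replay buffer; a reconnecting client asks for everything after
// the last sequence number it saw, so the game stays in sync.
function createReplayBuffer(limit = 1000) {
  let nextSeq = 1;
  const buffer = []; // [{ seq, payload }]
  return {
    push(payload) {
      const frame = { seq: nextSeq++, payload };
      buffer.push(frame);
      if (buffer.length > limit) buffer.shift(); // drop oldest frames
      return frame; // this is what would actually go over the socket
    },
    // Frames the client missed, given the last seq it acknowledged.
    since(lastSeenSeq) {
      return buffer.filter(f => f.seq > lastSeenSeq);
    },
  };
}
```

The buffer bound trades memory for how long a client may stay disconnected and still catch up without a full state resync.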
I work on a web application that gets push-type updates from a REST server using a long-polling technique.
It runs setTimeout(), which executes a function that performs an XHR GET request with a timeout of 120 seconds. It also sends the server a special "Accept-Wait" header of 60 seconds, which tells the server to hold the request and, if it has nothing for the client by then, reply with a 200 and no data. Then the client repeats the setTimeout. This continues for as long as the client is logged in to the server.
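That loop might look roughly like the sketch below (not the app's actual code; the transport is injected as a callback so the same logic works over XHR, fetch, or a test stub, and the header name and timings come from the description above):

```javascript
// Sketch of the long-poll loop described above. `sendRequest(opts, cb)`
// is the injected transport: it must call cb(err, { status, body }).
function startLongPoll(sendRequest, onData, onError, isLoggedIn) {
  (function loop() {
    if (!isLoggedIn()) return; // stop polling once logged out
    sendRequest(
      // Ask the server to hold for up to 60 s; give up ourselves at 120 s.
      { headers: { 'Accept-Wait': '60' }, timeoutMs: 120000 },
      (err, res) => {
        if (err) onError(err); // a timeout here looks like "server down"
        else if (res.status === 200 && res.body) onData(res.body);
        setTimeout(loop, 0); // re-arm after each round trip
      }
    );
  })();
}
```

With this shape, the symptom in the question corresponds to `onError` firing on every cycle while the tab is minimized.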
I have a user on Chrome (he can only use Chrome, and no one else can reproduce the problem, so I haven't verified other browsers) for whom the GET requests start timing out when he minimizes the browser. To my long-poller this looks like the server is down.
I have enabled debug logging on the REST server and confirmed that it receives nothing from this user for 2 minutes (suggesting the GET requests aren't even leaving the browser).
I have also watched the "Network" tab in Chrome DevTools (F12) and seen the requests being cancelled at 2 minutes, indicating they are timing out.
This problem also reproduces against "localhost", which I think rules out network issues.
How can I get more information from Chrome about why it isn't letting HTTP traffic out for this user?
Thanks
If this issue happens only in Google Chrome, it may be discarding your tab. You can prevent this by disabling a flag: enter the following URL in the address bar and check whether it is enabled:
chrome://flags/#automatic-tab-discarding
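If tab discarding or background throttling does turn out to be the culprit, a possible client-side mitigation (my sketch, not part of the answer) is to pause the long-poller while the tab is hidden and resume it on return, via the Page Visibility API. The document object is injected here so the wiring can be exercised outside a browser; pass the real `document` in a page:

```javascript
// Sketch: pause/resume work when the tab's visibility changes, instead of
// letting a throttled background tab time its requests out.
// `doc` stands in for the browser's `document` (injected for testability).
function wireVisibility(doc, { pause, resume }) {
  doc.addEventListener('visibilitychange', () => {
    if (doc.hidden) pause();
    else resume();
  });
}
```

`pause`/`resume` would hook into whatever drives the polling loop, e.g. clearing and re-arming its setTimeout.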
I have a problem with a Socket.IO application on Node.js. My application uses several browser windows, all of them connected to Node.js. The application is not heavily loaded yet. Sometimes something happens to Socket.IO such that an .emit() call in the browser is not executed (i.e. the server doesn't see it). The Node.js console/logs show no crashes or exceptions. The application stays in this blocked state for ~30 seconds and then resumes working correctly. The browser console shows a 400 error for one of the Socket.IO requests.
Any ideas why this happens and how to cure/diagnose it?
I have found the root cause of this problem. Chrome has a limit of 6 open HTTP requests per host, so you can't have more than 6 tabs doing Socket.IO XHR polling against the same server. A 7th tab behaves as if the server were not available.
Reduce the number of connections, or use WebSocket mode; it works fine.
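The second suggestion amounts to a client-side configuration fragment (assuming the standard Socket.IO client and its documented `transports` option; verify against your Socket.IO version):

```javascript
// Connect over a single WebSocket instead of HTTP long-polling, so each
// extra tab holds one WebSocket rather than tying up Chrome's 6 HTTP
// request slots with XHR polling. `io` is the Socket.IO client global.
var socket = io('https://example.com', {
  transports: ['websocket'] // skip the XHR polling transport entirely
});
```

Note that skipping the polling transport also removes Socket.IO's fallback for networks that block WebSockets, so test in your target environments.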
I see a consistent, extremely long (2 min+) blocking call on Mac/Chrome. The same set of steps works just fine on other operating systems or browsers (and even on some other Macs). The site isn't very chatty, and no other request takes anywhere over 2 seconds.
The blocking PUT call almost always follows a DELETE call to a different URL (same server). According to my console logs, the server actually receives the PUT and returns its result very quickly. So Chrome thinks the call is blocked even though it has actually already been processed!
Any ideas?
Following up on this post: Chrome and Firefox CORS AJAX calls get aborted on some Mac machines
This Mac had a DISABLED Sophos antivirus installed. Uninstalling it completely fixed the issue.
ARGGGGG.
Hopefully someone else finds this post and spends less than the hours we spent on it.
I'm trying to troubleshoot a problem with our website on a client's machine. We use an AJAX call to fetch information used to select additional parameters on a page. Our callback function has a block of code that determines whether the AJAX response is an error or is correct. On every other computer we've tested, the AJAX comes back fine. For this particular client, however, the AJAX comes back with the error message, meaning the response never arrived successfully, or arrived corrupted or broken.
Does anyone know how this could happen? The client is using IE 8; I've tested IE 8, IE 9, IE 10, and Chrome, and all of them work on my computers.
EDIT: As of now, we don't have access to the system and network causing the error. They are trying to whitelist everything from our domain to see if that fixes it, but right now I can't put Fiddler on their computer.
I've seen all sorts of random behaviour caused by virus scanners and so-called network security products. Try adding an exception for your site to any security software your client is running.
The other thing to do is to use Wireshark, Fiddler, etc. to see what's actually happening at the network level.