Sometimes I want to see what happens when an HTTP request times out.
Is there a plugin or configuration item that will meet my needs?
Thanks
I am working with an API (I am a noob at APIs) and after some time I got this error: "Request was throttled. Expected available in 82248 seconds." This is a really important project I am working on and I didn't know this could happen (lesson learned). I can't wait that long to make a request again; is there another way to regain access to the API? Maybe activating a VPN or something like that? Thank you in advance for your response.
HTTP error 429 means that you sent too many requests to the server within a given window, and the server assumes you either do not know what you are doing or are mounting a DoS attack. Servers usually do this to make sure they can continue to serve other clients. See more details here
To solve your problem, just stop sending requests to the server for a couple of seconds (maybe a minute, depending on how much you sent in the past minute), and it will work again. Rate limiting may be implemented on the server globally, on a specific endpoint, or on a resource - check the API documentation for more details; here is a Facebook example.
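One common way to handle this in client code is to back off and retry, preferring the server's own Retry-After header when it sends one. A minimal sketch, assuming `doRequest` is any function returning a Promise of a Response-like object (`{ status, headers }`); the names are illustrative, not part of any particular API:

```javascript
// Sketch: retry a request when the server answers 429, honoring the
// Retry-After header if present and falling back to exponential backoff.
async function requestWithBackoff(doRequest, maxRetries = 5) {
  let delayMs = 1000;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await doRequest();
    if (res.status !== 429) return res;
    // Prefer the server's own hint; otherwise use our growing delay.
    const retryAfter = res.headers.get('Retry-After');
    const waitMs = retryAfter ? Number(retryAfter) * 1000 : delayMs;
    await new Promise(resolve => setTimeout(resolve, waitMs));
    delayMs *= 2; // back off harder each time
  }
  throw new Error('Rate limit: retries exhausted');
}
```

Note that with a throttle window of 82248 seconds, no amount of client-side retrying will help sooner; backoff is for the ordinary "too fast within a minute" case.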
I know that different browsers have different amounts of concurrent connections they can handle to the same hostname, but what exactly happens to a new request when that limit is hit?
Does it automatically wait and retry again later or is there something I need to do to help this process along?
Specifically, if this is an XMLHttpRequest executed via JavaScript, and not just some assets being loaded by the browser from markup, could that automatically try again?
I have a client-side library that makes multiple API requests, and occasionally it tries to send too many too quickly. When this happens, I can see server-side API errors, but this doesn't make sense: if the concurrency limit stops requests, they would never have hit the server, would they?
Update: Thanks to #joshstrike and some more testing, I've discovered that my actual problem was not related to concurrent HTTP request limits in the browser. I am not sure these even apply to JavaScript API calls. I have a race condition in the specific API calls I'm making, which gave an error that I initially misunderstood.
The browser will not retry any request on its own if that request times out on the server (for whatever reason, including exceeding the API's limits). It's necessary to check the status of each request and handle retrying in some way that's graceful to the application and the user. For failed requests you can check the status code. For requests which simply hang for a long time, it may be necessary to attach a counter to each request and "cancel" it after a delay; then, if a result comes back bearing the number of a request that has already been canceled, ignore it if a newer one has already returned. This is what typically happens in a long-polling application that hits a server constantly without knowing whether some pings will return late or never return at all.
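The counter idea above can be sketched roughly as follows: number each request and drop any result that arrives after a newer request has already been handled. `sendPing` and `onResult` are illustrative names, not part of any library:

```javascript
// Number each outgoing request; ignore results that come back after a
// newer request has already been handled (the long-polling pattern above).
let nextSeq = 0;
let lastHandledSeq = 0;

async function poll(sendPing, onResult) {
  const seq = ++nextSeq;
  const result = await sendPing();
  if (seq <= lastHandledSeq) return false; // a newer result already arrived
  lastHandledSeq = seq;
  onResult(result);
  return true;
}
```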
When the limit is reached in Chrome, it queues any further requests. Once one request has been responded to, the browser sends the next queued request. For me, that limit in Chrome is six.
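If you'd rather not rely on the browser's queue, the same behavior can be reproduced in application code. A sketch of a small concurrency limiter, where each task is any function returning a Promise (e.g. `() => fetch(url)`); the helper name is made up:

```javascript
// Queue tasks so that at most `max` run concurrently, similar to the
// per-host queueing the browser applies to its own requests.
function createLimiter(max) {
  let active = 0;
  const queue = [];
  const runNext = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task().then(resolve, reject).finally(() => {
      active--;
      runNext(); // start the next queued task, if any
    });
  };
  return task => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    runNext();
  });
}
```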
Please excuse any misconceptions I may have, I'm new to HTTP requests.
I am using a file conversion API (online-convert.com) to convert an uploaded MIDI file to MP3.
The conversion process involves 3 HTTP requests that must be completed serially; I'm using Fetch for this.
The 2nd HTTP request uploads the input file to the server, and the 3rd HTTP request gets the completed job details. The issue is that if I make the 3rd request as soon as the 2nd request completes, the response states that the conversion process is still underway and the output is not ready.
My naive solution was to set up an interval loop that keeps making this 3rd request every x milliseconds until the response indicates that the job is complete. This feels like an unnatural solution, and I was wondering if there's a better way to do it.
Sorry if my question is specific to the API I'm using, but any advice would be appreciated!
Thanks to Patrick Evans' comments above.
I have decided to get the completed job's details by polling as opposed to by callback URL.
I was able to implement this easily using Fetch and async/await, and found this to be easier than setting up server side scripts for a simple web tool that probably did not require them.
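A minimal polling sketch with fetch and async/await, along the lines described above. The job URL and the `status`/`"completed"` fields are illustrative; check your API's actual job-status response shape:

```javascript
// Poll a job-status endpoint until it reports completion, sleeping
// between attempts, and give up after a bounded number of tries.
async function pollJob(jobUrl, intervalMs = 2000, maxAttempts = 30) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(jobUrl);
    const job = await res.json();
    if (job.status === 'completed') return job;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('Job did not complete in time');
}
```

The bounded attempt count matters: without it, a failed conversion would leave the loop polling forever.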
I have a synchronous API to invoke a server (HTTP GET) currently implemented with XMLHttpRequest.
The API does caching and will, if the cache isn't deemed too old, return from the cache and invoke the server asynchronously to refresh the cache.
Sometimes the cache isn't available or is too old, and then the API will synchronously call the server to fetch an accurate value before returning the result to the caller.
The result contains a boolean success flag along with the payload, and clients handle the result accordingly by looking at this flag.
There are two problems I can see with doing it like this:
When cache isn't available and server isn't reachable or answering slow I would like to bring up a spinner so that the user is aware we are waiting for server.
In addition I would like to set a timeout value where we abort server request and handle the error accordingly.
It seems like I should be able to use setTimeout, but I have not been successful.
Preferably I would like to keep clients intact (not change the API to asynchronous).
Is there a way to achieve this?
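For what it's worth, XMLHttpRequest's built-in `timeout` only works on asynchronous requests in the window context, which is one reason the synchronous path is hard to guard. A sketch of what the one slow call path looks like if it is made asynchronous; `showSpinner`/`hideSpinner` are hypothetical UI helpers and the timeout value is an example:

```javascript
// An asynchronous XMLHttpRequest with a timeout and a spinner.
function fetchValue(url, onSuccess, onError) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url, true); // true = asynchronous
  xhr.timeout = 5000;         // abort automatically after 5 s
  showSpinner();
  xhr.onload = () => { hideSpinner(); onSuccess(xhr.responseText); };
  xhr.ontimeout = () => { hideSpinner(); onError(new Error('request timed out')); };
  xhr.onerror = () => { hideSpinner(); onError(new Error('network error')); };
  xhr.send();
}
```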
The synchronous API was made responsive by maintaining a cache that was pulled from the server asynchronously.
The cache was protected by a grace period during which we do not pull a new value from the server, to avoid hammering it.
For most cases this was enough to ensure there was always a cached value that could be provided to the client.
For the few cases where we have to pull new data, the best solution would be to go fully asynchronous, which also means updating client code.
Currently that is not an option, so in addition to the above, a heartbeat mechanism was put in place that toggles online/offline status, to prevent attempting synchronous pulls while offline.
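The heartbeat can be sketched roughly as follows: a single beat pings the server and flips an online flag, and the heartbeat repeats it on an interval. `ping` here is any function returning a Promise that rejects when the server is unreachable; all names are illustrative:

```javascript
// Heartbeat sketch: track whether the server is reachable so the
// synchronous pull path can be skipped while offline.
let online = true;

async function beat(ping) {
  try {
    await ping();
    online = true;
  } catch (err) {
    online = false; // skip synchronous pulls until the next good beat
  }
  return online;
}

function startHeartbeat(ping, intervalMs = 5000) {
  return setInterval(() => beat(ping), intervalMs);
}
```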
I've recently taken over a project that uses COMET to perform some collaborative work and handle a simple chat room. The guys who originally wrote this thing made up some classes on top of STOMP and Oribited to handle all the actual chatting and messaging and logging.
The problem is that if a user closes the window or navigates to a different page or terminates the connection for whatever other reason, it takes a while for all the other users to see that he has logged off. The other users have to wait for the timestamp of the exited-user's last ping to exceed a certain duration before it registers that the user is no longer connected to the system.
The solution I can think of is sending out a notification in the onunload event that the user has left, so that the other users are notified without having to wait for a timeout. The problem with this is that onunload will immediately terminate the connection before the request has completed. From what I understand this is a problem with AJAX as well.
Now, I have also read that a synchronous request in onunload will delay the window close/navigation until the request has finished.
So, my question is this: does anyone know of a way to temporarily make the comet request synchronous in selected instances, so it has time to finish before the connection is terminated? Or is there another way to solve this problem that I'm not thinking of? Thanks for your help.
Oh, also, onbeforeunload won't work, because if it sends the request and the user selects "No, I want to stay on this page", it will already have notified the other users that he has exited the chat.
tl;dr: Need a way to successfully fire a COMET request in the Unload event. We're using STOMP and Orbited for the COMET stuff.
The 'onbeforeunload' function produces a yes/no dialog only if some value is returned from it. So what you have to do is use a synchronous XMLHttpRequest (AJAX) request inside the onbeforeunload function without returning anything. Set the asynchronous flag of the request to false, as in the AJAX GET request shown below:
var AJAXObject = new XMLHttpRequest();
// Third argument false = synchronous: the browser blocks until the request completes
AJAXObject.open("GET", 'http://yourdomain/logout?somevar=something', false);
AJAXObject.send(null);
It will prevent the browser from closing until the request completes. As I remember, Opera doesn't support 'onbeforeunload', so it won't work there, but it works fine in IE, Firefox, and Chrome.
If you are using comet, then you control the server. The idea with comet is that it is not constant polling of the server: every client keeps a constant open connection to the server. As such, when that connection closes, the server can send out a notification to the other clients.