I am working with an API (I'm a noob at APIs) and after some time I got this error: "Request was throttled. Expected available in 82248 seconds." This is a really important project I am working on, and I didn't know this could happen (lesson learned). I can't wait that long to make a request again. Is there another way to regain access to the API, maybe by activating a VPN or something like that? Thank you in advance for your response.
HTTP error 429 means that you sent too many requests to the server within a short window, and the server assumes you either do not know what you are doing or are mounting a DoS attack. Servers usually do this to make sure they can continue to serve other clients. See more details here.
To solve your problem, just stop sending requests to the server for a while (a couple of seconds, or maybe a minute, depending on how many you sent in the past minute), and it will work again. Rate limits may be implemented on the server globally, on a specific endpoint, or on a resource; check the API documentation for more details. Here is a Facebook example.
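A client can often discover how long to wait from the response itself. Here is a minimal sketch, assuming the server sends either a standard `Retry-After` header in seconds or a throttle message worded like the one in the question (the Django REST Framework style); the function name is illustrative, not from any particular library.

```javascript
// Decide how many seconds to wait before retrying after a 429 response.
// Checks a numeric Retry-After header first, then falls back to parsing
// a message like "Request was throttled. Expected available in 82248 seconds."
function retryDelaySeconds(retryAfterHeader, bodyMessage) {
  if (retryAfterHeader) {
    const secs = Number(retryAfterHeader);
    if (!Number.isNaN(secs)) return secs; // header was plain seconds
  }
  const m = /available in (\d+) seconds/.exec(bodyMessage || "");
  return m ? Number(m[1]) : 60; // unknown: fall back to a conservative minute
}
```

With the error above this yields 82248 seconds (almost 23 hours), so the honest fix is to wait, or to ask the API provider for a higher quota.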
Related
I know that different browsers have different amounts of concurrent connections they can handle to the same hostname, but what exactly happens to a new request when that limit is hit?
Does it automatically wait and retry again later or is there something I need to do to help this process along?
Specifically, if this is a XMLHttpRequest executed via JavaScript and not just some assets being loaded by the browser from markup, could that automatically try again?
I have a client side library that makes multiple API requests and occasionally it tries to send too many too quickly. When this happens, I can see server side API errors, but this doesn't make sense. If the concurrency limit stops requests, then they would have never hit the server, would they?
Update: Thanks to @joshstrike and some more testing, I've discovered that my actual problem was not related to concurrent HTTP request limits in the browser. I am not sure these even apply to JavaScript API calls. I have a race condition in the specific API calls I'm making, which gave an error that I initially misunderstood.
The browser will not retry any request on its own if that request times out on the server (for whatever reason, including exceeding the API's limits). It is up to you to check the status of each request and handle retries in a way that is graceful to the application and the user. For failed requests you can check the status code. For requests that simply hang for a long time, it may be necessary to attach a counter to each request and "cancel" it after a delay; then, if a result comes back bearing the number of a request that has already been canceled, ignore it if a newer one has already returned. This is what typically happens in a long-polling application that hits a server constantly without knowing whether some pings will return late or never return at all.
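The counter idea above can be sketched in a few lines. This is a generic pattern, not tied to any library; the names and the backoff schedule are illustrative assumptions.

```javascript
// "Cancel and ignore stale results": tag each outgoing request with a
// sequence number, and only apply a response if no newer request has
// already completed.
let newestApplied = 0; // highest sequence number whose response we've used
let nextSeq = 0;

function nextRequestId() {
  return ++nextSeq; // tag attached to each outgoing request
}

function shouldApply(seq) {
  if (seq <= newestApplied) return false; // a newer response already arrived
  newestApplied = seq;
  return true;
}

// Simple exponential backoff schedule for retrying failed requests:
// attempt 1 -> base, attempt 2 -> 2*base, attempt 3 -> 4*base, ...
function backoffDelayMs(attempt, baseMs = 100) {
  return baseMs * 2 ** (attempt - 1);
}
```

On each response, call `shouldApply(id)` with the tag the request was sent with; a `false` return means a newer result already arrived and this one should be dropped.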
When Chrome reaches the limit, it queues any further requests. Once one request has been responded to, the browser sends the next one. For me that limit is six in Chrome.
We all know about the online and offline events in the browser. They don't work very well (they actually indicate something quite different from real connectivity).
Right now our site is implemented to spam our backend server with a request every second. I suggested sending a HEAD request to / of our domain; since it is a Single Page Application, that should be quite fast, with no need to spam the backend. But the customer said we could instead ping the gateway, the first point of the ISP.
I am not sure how to implement that in a browser. First of all, I'm not sure it's even possible to discover the ISP's first hop from the browser, and ping (ICMP) may well be disabled there; that is quite common practice.
Could you please suggest anything?
I've just finished investigating my customer's proposal (pinging the gateway, the first point of the ISP). The browser doesn't provide such capabilities. The general idea might be good, but even if a ping to that IP address succeeds, it doesn't guarantee that the internet is working; and in any case the browser can't send ICMP, and ping might be disabled on that IP address anyway. So I'll go with my first idea: I'll handle it at the nginx level (to avoid burdening the backend) and see how it works.
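For reference, the HEAD-based check from the original suggestion can be sketched like this. The fetch implementation is injected so it can be `window.fetch` in the browser or a stub in tests; the URL and timeout values are assumptions, not requirements.

```javascript
// Reachability check via a cheap HEAD request, with a timeout.
// Returns true only if the server answered with a 2xx status in time.
async function isBackendReachable(fetchImpl, url = "/", timeoutMs = 3000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetchImpl(url, { method: "HEAD", signal: controller.signal });
    return res.ok;
  } catch {
    return false; // network error, timeout abort, or DNS failure
  } finally {
    clearTimeout(timer);
  }
}
```

Serving HEAD / from nginx directly, as decided above, keeps this check off the application backend entirely.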
I am creating a question answering application using Node.js + Express for my back-end. Front-end sends the question data to the back-end, which in turn makes requests to multiple third-party APIs to get the answer data.
Problem is, some of those third-party APIs take too long to respond, since they have to do some intense processing and calculations. For that reason, I have already implemented a caching system that saves answer data for each different question. Nevertheless, that first request each time might take up to 5 minutes.
Since my back-end server waits and does not respond to the front-end until the data arrives (the connections are kept open), it can only serve 6 requests concurrently (that's what I have found). This is unacceptable in terms of performance.
What would be a workaround to this problem? Is there a way to not "clog" the server, so it can serve more than 6 users?
Is there a design pattern, in which the servers gives an initial response, and then serves the full data?
Perhaps, something that sets the request to "sleep" and opens up space for new connections?
Your server can serve many thousands of simultaneous requests if things are coded properly and it's not CPU intensive, just waiting for network responses. This is something that node.js is particularly good at.
A single browser, however, will only send a few requests at a time (it varies by browser) to the same endpoint, queuing the others until the earlier ones finish. So, my guess is that you're trying to test this from a single browser. That's not going to test what you really want to test, because the browser itself is limiting the number of simultaneous requests. node.js is particularly good at having lots of requests in flight at the same time. It can easily do thousands.
But, if you really have an operation that takes up to 5 minutes, that probably won't even work for an http request from a browser because the browser will probably time out an inactive connection still waiting for a result.
I can think of a couple possible solutions:
First, you could make the first http request just start the process and return immediately with an ID. Then, the client can check every 30 seconds or so after that, sending the ID in an http request, and your server can respond whether it has the result yet or not for that ID. This would be a client-polling solution.
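The start-then-poll pattern can be sketched in memory like this. In a real Express app these two functions would back two route handlers (e.g. `POST /jobs` and `GET /jobs/:id`); the names and routes are illustrative assumptions.

```javascript
// In-memory job store for the start-then-poll pattern.
const jobs = new Map();
let jobSeq = 0;

// Kick off slow async work and return an ID immediately.
function startJob(runAsyncWork) {
  const id = String(++jobSeq);
  jobs.set(id, { status: "pending", result: null });
  Promise.resolve()
    .then(runAsyncWork)
    .then(result => jobs.set(id, { status: "done", result }))
    .catch(err => jobs.set(id, { status: "error", result: String(err) }));
  return id; // the client polls with this ID
}

// The polling endpoint just reads current state; it never blocks.
function checkJob(id) {
  return jobs.get(id) || { status: "unknown", result: null };
}
```

Because `startJob` returns before the work finishes, each HTTP request completes quickly and the browser's per-host connection limit stops being a bottleneck.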
Second, you could establish a webSocket or socket.io connection from client to server. Then, send a message over that socket to start the request. Then, whenever the server finishes its work, it can just send the result directly to the client over the webSocket or socket.io connection. After receiving the response, the client can either keep the webSocket/socket.io connection open for use again in the future or it can close it.
I am creating a Chrome extension that checks gmail inbox.
I'm using the xml feed url to fetch it:
https://mail.google.com/mail/u/0/feed/atom
For updating, I'm using chrome.alarms API that sends GET request every 10 seconds. Is that too much? If not, could I change it to 1 second too? How much load does their server have to handle in order to send me the feed's information?
Using XHR every 10 seconds is not a bad idea, but every second might be too much. Each XHR request creates a new connection, sends a fair amount of data to the server, and receives a lot back. If you need a real-time app, please consider using WebSocket or Socket.io instead. They are very lightweight, fast and easy to use.
Notification APIs (gmail seems to have them)
You can get updates with a lot less latency if you use a different technique such as "long polling" or "web sockets". This is very useful for things where you want no lag, like a real-time chat, a web-based video game, or a time-sensitive process like an auction or ticket/order queue. It might be a bit less important for something less real-time, like email (see the last paragraph in this answer).
Gmail seems to have an API that is explicitly designed for fast notifications.
HTTP polling
Most web servers can stand up to being called once a second without killing their site. If they can't, then they have some serious security problems (Denial of Service attacks, or worse if their site is buggy enough to suffer data loss when overloaded).
Google has big servers, and protection, so you don't really need to worry about what they can handle, as long as they don't block you. Google may rate limit calls to their gmail API, and you may end up getting a user throttled if you call their API more than they prefer. Consult their documentation to find out what their rate limiting policies are.
To more generically answer your question, normal HTTP isn't really optimized for frequent polling for refreshed data. You can make a decent number of requests (even upwards of one a second or more). You probably won't kill the computer or browser or even make them run slowly, as long as the request and response payloads are small enough, and the processing/DOM changes you do with the response are minimal when the data comes back unchanged.
Assuming you don't violate per-site rate limits, and have small data payloads in the request you are making, then the biggest problem is that you might still be wasting a lot of bandwidth. For people who have to pay by the minute/megabyte, this is a problem. This is much more frequent in Europe than in the United States (although it is also frequent on cellular devices).
Consider if you really need email to be checked every second. Would every ten seconds be fine? Maybe every minute? Maybe your app could do a refresh when you send mail, but take longer to check for new mail when it is idle? Think about what use cases you are solving for before assuming everything always has to be constantly updated. What would waiting a few extra seconds break? If the answer is nothing, then you can safely slow down your updates. If there are use cases that would be broken or annoying, then figure out why it is important. See if there are even better ways to address that use case, or if updating frequently is actually the right answer. Sometimes it is!
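One way to act on that advice is to make the polling interval depend on how recently the user did something. A minimal sketch; the thresholds and intervals here are made-up illustrative numbers, not recommendations from any API.

```javascript
// Pick the next polling interval based on how long the user has been idle:
// poll fast right after activity (e.g. sending mail), back off when idle.
function nextPollSeconds(secondsSinceActivity) {
  if (secondsSinceActivity < 60) return 10;   // active: check every 10 seconds
  if (secondsSinceActivity < 600) return 60;  // recently active: once a minute
  return 300;                                 // idle: every 5 minutes
}
```

In a Chrome extension this value could feed the `periodInMinutes` you pass to `chrome.alarms.create`, rescheduling the alarm whenever the activity level changes.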
I read through all the documentation, forums and examples I could find, but could not find a description of how the pushstream module behaves in the following situation:
I am using nginx+pushstream to deliver status messages at a session queue for users that requested actions that take a little while on the server side.
Using the long polling technique the client is re-connecting every time a message was delivered or the connection timeout is reached.
If there are many messages sent to the subscribed queue at the same time, is it possible that the client could miss a message while he is re-connecting? Or is this situation handled by the pushstream module?
Thanks to everyone taking time to read and answer! :-)
Some random search for a different topic turned up a thread in the Google Groups that answers the question.
The pushstream module developer states in a response:
About your goal, you may set If-Modified-Since header when connecting as current time on new user connect. With that it will only receive messages sent after this time.
I'm only afraid you may loose some message using long polling without store messages or with a small push_stream_max_messages_stored_per_channel.
Source: https://groups.google.com/forum/#!topic/nginxpushstream/4VutBQwx3zM
This means that it is not possible to lose messages if messages are stored (push_stream_store_messages is set to on).
The HTTP headers If-None-Match and If-Modified-Since will make sure of this.
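The developer's suggestion can be sketched as a small helper on the subscriber side: send `If-Modified-Since` as "now" on the first connect, and on each reconnect echo back the `Last-Modified` (and `Etag`, via `If-None-Match`) from the previous response so stored messages published during the reconnect gap are replayed. The function name is illustrative, not part of the pushstream module.

```javascript
// Build the conditional headers for a (re)subscribe request.
// First connect: no previous values, so "now" marks the starting point.
// Reconnect: echo the server's previous Last-Modified/Etag values.
function subscribeHeaders(prevLastModified, prevEtag) {
  const headers = {
    "If-Modified-Since": prevLastModified || new Date().toUTCString(),
  };
  if (prevEtag) headers["If-None-Match"] = prevEtag;
  return headers;
}
```

Each long-poll response's `Last-Modified` and `Etag` headers would be saved and fed into the next call, so no stored message falls between two polls.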