How does AbortController interrupt an HTTP request? [duplicate] - javascript

As of about 2017, it has been possible to abort a fetch and immediately move on in front-end JavaScript. I'm having trouble finding any information on what this does to the HTTP connection, though. Does it close the connection prematurely, or does the browser keep the request open and just discard the response as it arrives?
I have a use case where a user makes a (potentially) expensive database call from a webapp frontend. Sometimes they notice that a request is taking too long and manually cancel it. I would love to be able to take that signal and cancel my expensive database query, since they're no longer interested in the results.
Is there any way my REST server can tell that the fetch has been aborted? (My server is Java, using Jersey/Grizzly.)

It's generally supposed to close the connection; see https://fetch.spec.whatwg.org/#http-network-fetch, steps 17.2.3-4:
If aborted, then:
If fetchParams is aborted, then:
Set response's aborted flag.
If stream is readable, then error stream with the result of deserialize a serialized abort reason given fetchParams's controller's serialized abort reason and an implementation-defined realm.
Otherwise, if stream is readable, error stream with a TypeError.
If connection uses HTTP/2, then transmit an RST_STREAM frame.
Otherwise, the user agent should close connection unless it would be bad for performance to do so.
For instance, the user agent could keep the connection open if it knows there's only a few bytes of transfer remaining on a reusable connection. In this case it could be worse to close the connection and go through the handshake process again for the next fetch.
However, Firefox currently does not close the connection (see https://bugzilla.mozilla.org/show_bug.cgi?id=1568422).
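
For reference, a minimal client-side sketch of the scenario in the question (the /api/expensive-query endpoint, renderResults, and cancelButton are placeholders, not part of the original post):

    // Abort an in-flight fetch when the user cancels.
    const controller = new AbortController();

    fetch('/api/expensive-query', { signal: controller.signal })
      .then(response => response.json())
      .then(data => renderResults(data))
      .catch(err => {
        if (err.name === 'AbortError') {
          // The fetch was cancelled on the client; whether the underlying
          // connection or HTTP/2 stream is torn down is up to the browser,
          // per the spec excerpt above.
          console.log('Request aborted by the user');
        } else {
          throw err;
        }
      });

    // Wired to a "Cancel" button:
    cancelButton.addEventListener('click', () => controller.abort());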

Related

What happens at the browser level when a max concurrent HTTP request limit is hit?

I know that different browsers allow different numbers of concurrent connections to the same hostname, but what exactly happens to a new request when that limit is hit?
Does it automatically wait and retry again later or is there something I need to do to help this process along?
Specifically, if this is an XMLHttpRequest executed via JavaScript and not just some assets being loaded by the browser from markup, could it automatically retry?
I have a client-side library that makes multiple API requests, and occasionally it tries to send too many too quickly. When this happens, I can see server-side API errors, but this doesn't make sense. If the concurrency limit stops requests, then they would never have hit the server, would they?
Update: Thanks to #joshstrike and some more testing, I've discovered that my actual problem was not related to concurrent HTTP request limits in the browser. I am not sure these even apply to JavaScript API calls. I have a race condition in the specific API calls I'm making, which gave an error that I initially misunderstood.
The browser will not retry any request on its own if that request times out on the server (for whatever reason, including exceeding the API's limits). It's necessary to check the status of each request and handle retries in some way that's graceful to the application and the user. For failed requests you can check the status code. However, for requests which simply hang for a long time, it may be necessary to attach a counter to your request and "cancel" it after a delay... Then, if a result comes back bearing the number of a request that has already been canceled, ignore that result if a newer one has already returned. This is what typically happens in a long-polling application that hits the server constantly without knowing whether some pings will return later or never return at all.
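
A rough sketch of the counter idea described above, assuming a fetch-based client; the /api/search endpoint and the fetchAnswers name are illustrative:

    // Tag each request with a sequence number and ignore responses that
    // arrive after a newer request has already been issued.
    let latestRequestId = 0;

    async function fetchAnswers(query) {
      const requestId = ++latestRequestId;
      const response = await fetch('/api/search?q=' + encodeURIComponent(query));
      const data = await response.json();

      // A newer request started after this one; treat this result as stale.
      if (requestId !== latestRequestId) return null;
      return data;
    }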
When Chrome reaches its limit, it queues any further requests. Once one request has been responded to, the browser sends the next one. For me, that limit in Chrome is six.

Slow third-party APIs clogging Express server

I am creating a question-answering application using Node.js + Express for my back-end. The front-end sends the question data to the back-end, which in turn makes requests to multiple third-party APIs to get the answer data.
The problem is, some of those third-party APIs take too long to respond, since they have to do some intense processing and calculations. For that reason, I have already implemented a caching system that saves the answer data for each different question. Nevertheless, the first request for each question might take up to 5 minutes.
Since my back-end server waits and does not respond to the front-end until the data arrives (the connections are kept open), it can only serve 6 requests concurrently (that's what I have found). This is unacceptable in terms of performance.
What would be a workaround to this problem? Is there a way to not "clog" the server, so it can serve more than 6 users?
Is there a design pattern in which the server gives an initial response and then serves the full data later?
Perhaps something that puts the request to "sleep" and opens up space for new connections?
Your server can serve many thousands of simultaneous requests if things are coded properly and the work isn't CPU-intensive, just waiting for network responses. This is something that node.js is particularly good at.
A single browser, however, will only send a few requests at a time (it varies by browser) to the same endpoint, queuing the others until the earlier ones finish. So my guess is that you're trying to test this from a single browser. That's not going to test what you really want to test, because the browser itself is limiting the number of simultaneous requests. node.js is particularly good at having lots of requests in flight at the same time; it can easily handle thousands.
But if you really have an operation that takes up to 5 minutes, that probably won't even work for an HTTP request from a browser, because the browser will probably time out an inactive connection that is still waiting for a result.
I can think of a couple possible solutions:
First, you could make the first HTTP request just start the process and have it return immediately with an ID. Then the client can check every 30 seconds or so after that, sending the ID in an HTTP request, and your server can respond whether it has the result for that ID yet or not. This would be a client-polling solution.
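
One possible shape of this first approach, sketched with Express and an in-memory job store (the route names, fetchAnswerFromThirdParties, and the Map are all assumptions for illustration):

    const express = require('express');
    const crypto = require('crypto');
    const app = express();

    const jobs = new Map(); // jobId -> { done, result }

    // Start the slow work and return an ID immediately.
    app.post('/api/answers', express.json(), (req, res) => {
      const jobId = crypto.randomUUID();
      jobs.set(jobId, { done: false, result: null });

      fetchAnswerFromThirdParties(req.body.question) // assumed to return a Promise
        .then(result => jobs.set(jobId, { done: true, result }));

      res.status(202).json({ jobId });
    });

    // The client polls this endpoint every 30 seconds or so.
    app.get('/api/answers/:jobId', (req, res) => {
      const job = jobs.get(req.params.jobId);
      if (!job) return res.status(404).end();
      res.json(job.done ? { done: true, result: job.result } : { done: false });
    });

    app.listen(3000);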
Second, you could establish a webSocket or socket.io connection from client to server. Then, send a message over that socket to start the request. Then, whenever the server finishes its work, it can just send the result directly to the client over the webSocket or socket.io connection. After receiving the response, the client can either keep the webSocket/socket.io connection open for use again in the future or it can close it.
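
And a corresponding sketch of the socket.io variant (the event names and fetchAnswerFromThirdParties are again illustrative):

    // Server side
    const { Server } = require('socket.io');
    const io = new Server(3000);

    io.on('connection', socket => {
      socket.on('startQuestion', async question => {
        const result = await fetchAnswerFromThirdParties(question);
        // Push the result to the client whenever it is ready, even minutes later.
        socket.emit('answerReady', result);
      });
    });

    // Client side
    // const socket = io('http://localhost:3000');
    // socket.emit('startQuestion', question);
    // socket.on('answerReady', result => renderAnswer(result));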

Most efficient way to have nodejs ignore http and https requests -- request.abort()?

What is the least resource-intensive way to have nodejs ignore requests unless they meet specific criteria in the incoming headers?
For example, if the referer is a specific domain, or the requested URL contains a specific code or page, then allow it; otherwise reject it with no response and minimal resource utilization / connection tie-ups.
The end result being that the requester's request is cut off immediately, ties up no connections, gets no response, and, to the requester, makes it appear that the server does not exist.
Kind of like a firewall, where requests are ignored with no response provided. A firewall does this, but I would like to know how to do something similar with requests that make it to the nodejs server, especially if they come in large numbers very rapidly.
Currently I'm using request.end() and connection.destroy() to immediately end the request, but is this the best approach, especially since request.end() sends an empty response?
Is request.abort() a better (more resource efficient) way to handle this situation?
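
For context, a minimal sketch of dropping a connection with no response in a plain Node.js http server; the allow-list check is a placeholder for whatever header or URL criteria apply:

    const http = require('http');

    const server = http.createServer((req, res) => {
      const allowed = req.headers.referer === 'https://example.com/' ||
                      req.url.startsWith('/allowed');

      if (!allowed) {
        // Tear down the TCP socket immediately: no status line, no headers, no body.
        req.socket.destroy();
        return;
      }

      res.end('ok');
    });

    server.listen(8080);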

Verify connection to server in a SPA

I'm developing an AngularJS-driven SPA. I want to show a fullscreen overlay saying something like "Sorry - Server is not available atm" as soon as the connection between client and server is interrupted.
So I wrote a simple while(true) loop that basically just calls /api/ping, which always returns true. When there is no HTTP response, though, the message should appear.
I don't like this approach since it creates a lot of overhead. Especially considering mobile connections.
So what would be a decent solution for my case?
I could imagine working with a WebSocket and an onInterrupted (or similar) event.
Basically, when an HTTP endpoint is unreachable the request will fail with an error: a network error if the server is down entirely, or an HTTP error status code such as 404 Not Found if the server is up but the resource isn't available.
I would say it's better to wait for the error rather than querying the server over and over to check whether it's online or not.
Use the Angular $http provider's interceptors to catch all unsuccessful HTTP responses and perform an action when you catch one, such as showing a notification to the user that there is no connection to the server or that the server is offline.
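
A minimal sketch of such an interceptor, assuming a module named 'app'; the broadcast event name is chosen here purely for illustration:

    angular.module('app').config(function ($httpProvider) {
      $httpProvider.interceptors.push(function ($q, $rootScope) {
        return {
          responseError: function (rejection) {
            // Status 0 or -1 usually means the request never reached the server.
            if (rejection.status <= 0) {
              $rootScope.$broadcast('server:unreachable');
            }
            return $q.reject(rejection);
          }
        };
      });
    });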
With WebSockets you still need to have some sort of ping mechanism, since there is functionally no difference between an inactive/very slow connection and a dropped connection.
Having said that, the overhead in wire traffic is much lower, since no HTTP headers are sent with each message, and this results in lower processing overhead as well.
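
For illustration, a browser-side heartbeat sketch over a raw WebSocket; the URL, the 'ping'/'pong' message contents, and showOfflineOverlay are placeholders:

    const socket = new WebSocket('wss://example.com/live');
    let pongTimer;

    socket.addEventListener('open', () => {
      setInterval(() => {
        socket.send('ping');
        // If no 'pong' arrives within 5 seconds, assume the connection dropped.
        pongTimer = setTimeout(showOfflineOverlay, 5000);
      }, 15000);
    });

    socket.addEventListener('message', event => {
      if (event.data === 'pong') clearTimeout(pongTimer);
    });

    socket.addEventListener('close', showOfflineOverlay);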

Will Keep-alive kill async connection

Let's say my browser posts an HTTP request to a domain, and before this request finishes, another request (via AJAX) is sent to the same domain. Since the first request is still ongoing and has not yet terminated, does that mean the second request has to wait for the first to finish in order to use the persistent connection being used by the first request? If so, how can I prevent this? If the first request is a long streaming connection, does that mean the second request will have to hang around for a long time?
(Let's assume the browser's maximum number of persistent connections is one. Actually, I don't really understand what this "max persistent connections" limit does. Does it mean that once the number of persistent connections exceeds the maximum, the remaining connections become non-persistent? Confusing...)
Can anyone explain this?
Since the first request is still ongoing and not yet terminated, does that mean the second request will have to wait for the first to finish in order to use the persistent connection being used by the first request?
No. The two requests are still asynchronous and in parallel (unless the server limits this).
HTTP keep-alive only means that they can be faster, because both requests can reuse the same connection, especially when pipelining them.
However, if there is no pipelining, the browser could also decide to open a second connection for the second request, instead of waiting for the first request to finish and reusing its connection. See Under what circumstances will my browser attempt to re-use a TCP connection for multiple requests? for details.
I don't really understand what this "max persistent connections" limit does. Does it mean that once the number of persistent connections exceeds the maximum, the remaining connections become non-persistent?
No. When the limit is reached, new requests will have to wait until a connection from the pool becomes usable again.
