Does an http request always complete? - javascript

If an HTTP request is made and the caller abandons it, does it get completed anyway? For example, an asynchronous JavaScript GET request to log a banner click in the DB, followed by a redirect. Does the script need to wait for the response?

How critical is your request? What if the database is not available at that time? What if the server-side code throws an exception?
For very critical requests, you may need to implement some sort of message queueing that is able to hold the request data until it can be fully processed. This gets more complicated if you are dealing with grids and clouds (you can't just queue the message on a single node, since the node can potentially have a hardware failure). But this is an extreme case, where you end up with dedicated queue servers.

The client doesn't send any notification to the server that it is canceling the request.
PHP doesn't know if the client has disconnected until it tries to send the client some data (e.g., an unbuffered echo() call), so if your script doesn't return any data to the user, it will fully execute. If it does return data, it may abort partway through, but this behavior can be changed with ignore_user_abort().
If you're using a different environment, you will have to explore the documentation.

For most cases, once the request is received by the server, it will not stop processing if the client stops listening.
However, the server can always fail while servicing the request, so it's probably not a good idea to assume it completed.

To be safe, you should wait for the response. You never know when the server will get around to processing your request (though it is usually within a few hundred milliseconds or less), so unless you wait, you won't know whether it timed out, failed, or returned a different response than you expected.

You don't have to wait for the response in order for the request to reach the server. The server can check whether someone is still listening while it processes the request, but processing will start even if no one is listening for the response (unless there was an error on the way, of course).
If you want to be sure that the request really was processed, you should wait for the response, but it's not required for the request to go through to the server.
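For the banner-click-then-redirect example in the question, modern browsers offer a middle ground: hand the request off to the browser so it survives the navigation, without blocking on the response. A minimal sketch (the /log-click endpoint, bannerId field, and landing URL are made up for illustration):

// Fire-and-forget: the browser queues the logging call and keeps it
// alive while the page navigates away (note sendBeacon sends a POST).
const queued = navigator.sendBeacon('/log-click', JSON.stringify({ bannerId: 42 }));

// Fallback: a keepalive fetch also survives navigation in modern browsers.
if (!queued) {
  fetch('/log-click?bannerId=42', { method: 'GET', keepalive: true });
}

window.location.href = '/landing-page';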

Related

What happens when triggering a GET request while a http2 push is in-flight for the same resource

What happens when triggering a single GET request, while simultaneously a http2 push is in-flight for the same resource?
What is the specified behavior and what do the browsers actually do?
An example scenario could look like this:
at time 0: GET / (get document) and the server pushes /data.json
at time 1: GET /data.json (triggered by script, while the h2 push is still not finished / in-flight)
Will this result in two calls to the server? Is this behavior specified, or is it browser specific, e.g. in Chromium perhaps via the HTTP cache:
The cache implements a single writer - multiple reader lock so that only one network request for the same resource is in flight at any given time.
https://www.chromium.org/developers/design-documents/network-stack/http-cache
The HTTP/2 specification in RFC 7540 says:
Once a client receives a PUSH_PROMISE frame and chooses to accept the
pushed response, the client SHOULD NOT issue any requests for the
promised response until after the promised stream has closed.
So it seems likely that the request will wait for the pushed response to be delivered, provided the server does not take too long to start sending:
If the client determines, for any reason, that it does not wish to
receive the pushed response from the server or if the server takes
too long to begin sending the promised response, the client can send
a RST_STREAM frame, using either the CANCEL or REFUSED_STREAM code
and referencing the pushed stream's identifier.
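To experiment with this yourself, here is a minimal sketch of the scenario using Node's built-in http2 module (assumes a local TLS key/cert pair; the file names and payload are made up):

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('localhost-key.pem'),    // hypothetical cert files
  cert: fs.readFileSync('localhost-cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push /data.json before serving the document that will request it.
    stream.pushStream({ ':path': '/data.json' }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ':status': 200, 'content-type': 'application/json' });
      pushStream.end('{"hello":"world"}');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    // The in-page fetch() races against the still-in-flight push above.
    stream.end('<script>fetch("/data.json").then(r => r.json())</script>');
  }
});

server.listen(8443);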

Ignore response from a PUT call - javascript

I have a JS (Angular) client that makes a PUT request (REST API) to the server, and the server sends back a large payload that I'm not currently using in the client.
Is there a way to just fire the request and ignore any response that comes back? The main need here is to avoid the data cost incurred by receiving that payload. I've looked at closing the connection once the request is fired, but am not sure if that's the best way to handle this.
If you're able to, I think the only way to change this would be to change the API endpoint so that it doesn't return a payload in response to the PUT request.
I'm assuming you are using Angular's Http class with Observables. But even if you aren't, your Angular client is going to need to read the response status sent back from the server to determine whether or not the PUT request was successful. In order to read that status, you'll need the response, and unfortunately that means the full response sent from the server.
You could close the connection right after the request, but as I've mentioned you'll have no way of knowing whether or not the request was successful.
To ignore the response, just don't do anything with it if the request is successful.
If you don't want the payload to be sent at all, then that has to be changed on the backend.
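As a rough sketch of the "close the connection" idea with fetch (not Angular's HTTP class; the endpoint and payload are made up): read the status, then cancel the body stream instead of consuming it. How much data this actually saves depends on the browser and on how much has already been buffered:

async function updateItem(data) {
  const res = await fetch('/api/items/42', {    // hypothetical endpoint
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  // The status line and headers are enough to know success or failure.
  console.log(res.ok ? 'update succeeded' : 'update failed: ' + res.status);
  // Cancel the body instead of reading it, discarding the payload.
  if (res.body) await res.body.cancel();
}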

How do I cancel Node.JS request handling when that request is aborted?

What is the proper way to stop the chain of events on a Node server that was set in motion after it received a request, when that request is cancelled?
E.g.
When you press [escape] while the browser is making a request, the request is aborted.
When you call .abort() on a jQuery $.ajax() object.
Etc.
For just serving some HTML, this wouldn't be a big deal: the Node server renders some text and nobody listens to the output.
But when the Node server is actually doing a lot of processing for the response, it would be nice to be able to stop and use the resources for something else.
Ideally, I'd like some kind of hook on req itself, so I don't have to change whole batches of code to cancel each promise individually.
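One common pattern (a sketch, not a definitive answer) is to set a cancellation flag when the request is aborted and check it between slices of work, assuming a plain Node http server with a hypothetical work loop:

const http = require('http');

http.createServer((req, res) => {
  let cancelled = false;

  // 'aborted' fires when the client abandons the request mid-flight
  // (newer Node versions prefer 'close' plus a req.destroyed check,
  // but the idea is the same).
  req.on('aborted', () => { cancelled = true; });

  // Hypothetical expensive work, split into slices so the flag can
  // be checked between steps and the work abandoned early.
  function work(step) {
    if (cancelled) return;               // client is gone; stop here
    if (step >= 1000) return res.end('done');
    // ... one slice of the heavy processing goes here ...
    setImmediate(() => work(step + 1));
  }
  work(0);
}).listen(8080);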

spine.js: Does it really 'pipeline' POSTs?

I was reading this post from Alex Maccaw, where he states:
The last issue is with Ajax requests that get sent out in parallel. If a user creates a record, and then immediately updates the same record, two Ajax requests will be sent out at the same time, a POST and a PUT. However, if the server processes the 'update' request before the 'create' one, it'll freak out. It has no idea what record needs updating, as the record hasn't been created yet.
The solution to this is to pipeline Ajax requests, transmitting them serially. Spine does this by default, queuing up POST, PUT and DELETE Ajax requests so they're sent one at a time. The next request is sent only after the previous one has returned successfully.
But the HTTP spec Sec 8.1.2.2 Pipelining says:
Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2). Otherwise, a premature termination of the transport connection could lead to indeterminate results.
So, does Spine really 'pipeline' POSTs?
Maccaw's usage of the term "pipelining" and that of the HTTP spec are not the same here. Actually, they're opposites.
In the HTTP spec, the term "pipelining" means sending multiple requests without waiting for a response. See section 8.1.2.2.
A client that supports persistent connections MAY "pipeline" its
requests (i.e., send multiple requests without waiting for each
response).
Based on this definition, you can see why the spec would strongly discourage pipelining non-idempotent requests, as one of the pipelined requests might change the state of the app, with unexpected results.
When Maccaw writes about spine's "pipelining", he's actually referring to the solution to the fact that the client will "pipeline" requests without waiting for a response, as per the HTTP spec. That is, spinejs will queue the requests and submit them serially, each consecutive request being made only after its predecessor completes.
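The queue-and-serialize behavior described above is easy to sketch in plain JavaScript (an illustration of the concept, not Spine's actual implementation; the endpoints are made up):

const queue = [];
let busy = false;

function enqueue(makeRequest) {
  return new Promise((resolve, reject) => {
    queue.push({ makeRequest, resolve, reject });
    drain();
  });
}

function drain() {
  if (busy || queue.length === 0) return;
  busy = true;
  const { makeRequest, resolve, reject } = queue.shift();
  makeRequest()
    .then(resolve, reject)
    .finally(() => { busy = false; drain(); });
}

// Usage: the PUT is only sent after the POST has returned.
enqueue(() => fetch('/records', { method: 'POST', body: '{"name":"a"}' }));
enqueue(() => fetch('/records/1', { method: 'PUT', body: '{"name":"b"}' }));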

Is it safe to invoke a AJAX call and let the browser cancel the request?

If I make an AJAX request, it will be displayed in the network tab in Chrome. If at the same moment I trigger a client-side redirect, the AJAX request will be canceled. But will the request make it to the server and execute as normal? Is there something in HTTP/TCP that knows the client has canceled the request? I don't think so, but I want to be sure.
If you're running PHP server-side, it will stop processing in the event of a client-side abort. (From what I've read, this isn't the case with other server-side technologies, which will continue processing after a client aborts.) See:
http://php.net/manual/en/features.connection-handling.php
But, it's best not to assume anything one way or another. The browser may cancel the request. And this cancellation may occur in time to stop processing server-side. But, that's not necessarily the case. The client could cancel at any stage during the request -- from right before the request is actually sent to just after a response body is sent. Also bear in mind, there are other things which can interrupt server-side request processing (hardware, power, OS failure, etc.). Expect some unpredictability.
From this, I'd make two recommendations:
Write your code to be as transaction-safe as possible. If a request makes data changes, don't commit them until all changes have been piped to the database. And if your application relies on multiple AJAX requests to change some data, don't commit any of the changes until the end of the "final" AJAX request.
Do not assume, even if a request finishes, that the client receives the response. Off the top of my head, that means if your application is AJAX-heavy, always rely on client-side state to tell the server what information it has, rather than relying on server-side state to assume the client "knows" something.
This is one of the few cases where synchronous requests (async: false in the $.ajax(...) options) are appropriate. This usually prevents the browser from navigating to the other page until the request has finished.
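For illustration, the synchronous variant mentioned in that last answer looks roughly like this (note that synchronous XHR on the main thread is deprecated in modern browsers; the URLs are hypothetical):

// Blocks until the logging request completes, so the redirect
// cannot cancel it.
$.ajax({
  url: '/log-click',    // hypothetical endpoint
  type: 'GET',
  async: false
});
window.location.href = '/next-page';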
