Google Pub/Sub: How to set read timeout on Pull - javascript

I would like to set the read timeout of the pull request on a subscription. Right now the only options are to set returnImmediately=true or just wait until pubsub returns, which seems to be 90 seconds if no messages are published.
I'm using the gcloud-node module to make calls to pubsub. It uses the request module under the hood to make the gcloud API calls. I've updated my local copy of gcloud-node/lib/pubsub/subscription.js to set the request timeout to 30 seconds:
this.request({
  method: 'POST',
  uri: ':pull',
  timeout: 30000,
  json: {
    returnImmediately: !!options.returnImmediately,
    maxMessages: options.maxResults
  }
});
When I do this, the behavior I see is that the connection will time out on the client side after 30 seconds, but pubsub still has the request open. If I have two clients pulling on the subscription, one of them times out after 30 seconds, and then a message is published to the topic, there is a 50/50 chance that the remaining listening client will retrieve the message.
Is there a way to tell pubsub to timeout pull connections after a certain amount of time?
UPDATE:
I probably need to clarify my example a bit. I have two clients that connect at the same time and pull from the same subscription. The only difference between the two is that the first one is configured to time out after 30 seconds. Since both clients are connected to the same subscription, pubsub will distribute the message load between the two of them. If I publish a message 45 seconds after both clients connect, there is a 50/50 chance that pubsub will deliver the message to the second client, the one that has not timed out yet. If I send 10 messages instead of just one, the second client will receive only a subset of the 10 messages. It looks like this is because my clients are in a long poll: if a client disconnects, the server has no idea and will still try to deliver published messages on the response to the request made by the client that has timed out. From my tests, this is the behavior I've observed. What I would like is to be able to send a timeout param in the pull request to tell pubsub to send back a response after 30000ms if no messages are published during that time. Reading over the API docs, this doesn't seem to be an option.

Setting the request timeout as you have is the correct way to time out the pull after 30 seconds. The existence of the canceled request might not be what is causing the other pull to not get the message immediately. If your second pull (the one that does not time out) manages to pull other messages that were published earlier, it won't necessarily wait for an additional message published after the timeout before completing. It only guarantees not to return more than maxMessages; it does not guarantee to return only once it has exactly maxMessages (if that many are available). Once your publish completes, some later pull will get the message, but there are no guarantees on exactly when that will occur.
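For reference, a client-side workaround is to race the pull against your own timer, so your code moves on after 30 seconds even if the server keeps the request open a while longer. A minimal sketch, assuming the gcloud-node API of that era (subscription.pull(options, callback)); the project and subscription names are placeholders:

// Hedged sketch: give up waiting on the client after `ms` milliseconds.
var gcloud = require('gcloud');
var pubsub = gcloud.pubsub({ projectId: 'my-project' });        // placeholder project
var subscription = pubsub.subscription('my-subscription');      // placeholder subscription

function pullWithTimeout(ms, callback) {
  var finished = false;

  var timer = setTimeout(function() {
    if (!finished) {
      finished = true;
      callback(new Error('pull timed out after ' + ms + 'ms'));
    }
  }, ms);

  subscription.pull({ returnImmediately: false, maxResults: 10 }, function(err, messages) {
    if (finished) return;       // the timer already fired; ignore the late response
    finished = true;
    clearTimeout(timer);
    callback(err, messages);
  });
}

pullWithTimeout(30000, function(err, messages) {
  if (err) {
    console.log(err.message);
    return;
  }
  console.log('received %d messages', messages.length);
});

Note that this only stops your code from waiting; as discussed above, the server-side request may remain open until pubsub closes it.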

Related

How do you handle a short webhook timeout? (Node.js)

I have set up the eSignatures API for our app and until recently it has been working perfectly. The part that is now broken is the webhook function. So when a document gets signed by one of our clients, it triggers our webhook cloud function that then updates our database (seems simple?).
The problem is that eSignatures have now updated their timeout to only 8 seconds, which means that my function does not have enough time to run and respond to their servers. It does however still run and update the database correctly, but as it takes longer than 8 seconds, the 200 response never reaches eSignatures and therefore they retry the endpoint every hour (for ~5/6 hours). I have put in a catch so that data is not duplicated when this happens, but ideally we don't want the retries to take place!
My question is basically: is there a way to send a 200 response to eSignatures at the beginning of the function, and then continue with the updates? Or is there another solution to handle this timeout? If anything does fail in the function, I still want to return a 4xx to eSignatures, since in that case we do want the retry.
You can't send a response from your Cloud Function and then continue executing tasks. When the response is sent, the function is stopped. See this link and this one.
But even if you could, sending a response before the tasks your Cloud Function performs have finished would prevent you from sending a 4XX response if they fail. Hence, eSignatures would never retry.
I don't think you can do much else aside from possibly optimizing your Cloud Function or increasing the eSignatures timeout, which I don't think is possible ATM.
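For completeness, the usual shape of such an HTTP-triggered function is to finish all the work first and respond last. A minimal sketch, where updateDatabase() is a hypothetical stand-in for your own update logic:

// Sketch of an HTTP-triggered Cloud Function (Node.js runtime). All the work happens
// before the response is sent, because execution is not guaranteed after res.send().
const updateDatabase = async (payload) => {
  // hypothetical: your own database update goes here
};

exports.esignaturesWebhook = async (req, res) => {
  try {
    await updateDatabase(req.body);   // do the work first
    res.status(200).send('ok');       // only now tell eSignatures we are done
  } catch (err) {
    console.error(err);
    res.status(400).send('failed');   // a non-2xx response lets eSignatures retry
  }
};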

Acknowledge messages on intervals in RabbitMQ?

I have a requirement that if my consumer reads messages from the queue and cannot acknowledge some of them due to some issue, then after a certain time, let's say every 30 seconds, it should acknowledge the pending messages. Can we do it?
Edit:
I have found channel.recover, a function which asks RabbitMQ to redeliver unacknowledged messages. But we have to call it explicitly. Is there a way for RabbitMQ to redeliver unacknowledged messages automatically every certain number of seconds?
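A minimal sketch of doing the batching yourself, deferring acknowledgements and flushing them on a 30-second interval, assuming the amqplib package (the connection URL and queue name are placeholders):

// Hedged sketch: consume with manual acks, keep delivered messages in a list,
// and acknowledge whatever is still pending every 30 seconds.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost'); // placeholder URL
  const channel = await conn.createChannel();
  await channel.assertQueue('work');                   // placeholder queue name

  const pending = []; // messages consumed but not yet acknowledged

  await channel.consume('work', (msg) => {
    // process msg here; instead of calling channel.ack(msg) right away, defer it
    pending.push(msg);
  });

  setInterval(() => {
    while (pending.length > 0) {
      channel.ack(pending.shift());
    }
  }, 30000);
}

main().catch(console.error);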

Get an http call from express to angular when event happens

I'm building an angular app that will have about a thousand people connecting simultaneously to book a ticket. I want only "XYZ" of them to access the registration Angular component at the same time. The others will see a "waiting room" component until it's their turn.
I set up the whole thing like this:
User enters the page.
I make an http call to the expressjs server
The server checks whether the "connections" collection contains fewer than XYZ docs
If true, it unlocks the user registration component and, with an http post request, creates a new doc in the db. If false, it stays hidden and the waiting room component is shown
When the user leaves the page, their doc in the "connections" collection gets destroyed with an http delete call.
Fully working.
The problem now is that I want to create a kind of "priority" system, because, as it is, if you just refresh you may get lucky and gain access, even if you have only just arrived and someone else has been waiting since the 1990s. So I introduced a "priority" system. When the user makes the first http call, if the user is not allowed, the server creates a timestamp and pushes it into an array.
const timestamps = []
.
.
.
// this below is in the http get req
Connessione.countDocuments({}, (err, count) => {
  if (count <= nmax) {
    console.log("Ok")
    res.status(200).json({allowed: true})
  } else {
    const timestamp = req.params.timestamp;
    timestamps.push(timestamp);
    console.log("Semo troppi")
    res.status(401).json({allowed: false})
  }
});
The idea is to listen for db changes, and when there are only XYZ-1 docs in the db, make a call to the frontend of the client with the earliest timestamp to say: "Hey there, it's your turn. You can go" and unlock its access to the registration component.
The problem is that I can't make continuous http requests from angular every second until there's a free place...
Is there any way to send a request to the server and, when the server says OK, have it call angular and say "Hey dude. You can go!"?
Hope you understood my question. If not ask me in the comments.
Thanks in advance
Even I had trouble with sockets in the beginning, so I'll try to explain the concept in a simple way. Whenever you write an API or endpoint you have a one-way connection, i.e. you send a request to the server and it returns some response, as shown below.
Event 1:
(Client) -> Request -> (Server)
Event 2:
(Client) <- Response <- (Server)
For APIs, without a request you cannot get a response.
To overcome this issue, as of now I can think of two possible ways.
Using sockets. With sockets you can create a two-way connection, something like this:
(Server) <-> data <-> (Client)
It means you can pass data both ways, client to server and server to client. So whenever an event occurs (some data is added or updated in the database) you can emit or broadcast it to the client, and the client can listen on the socket and receive it.
In your case, as it's a two-way connection, you can emit data from Angular and listen for events pushed by the server.
I've attached a few links at the bottom; please have a look.
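A minimal sketch of that wiring for your waiting-room case, assuming Express with socket.io on the server and socket.io-client in the Angular app (event names like 'wait' and 'your-turn' are made up for illustration):

// ---- server (Express + socket.io) ----
const app = require('express')();
const server = require('http').createServer(app);
const io = require('socket.io')(server);

const waitingClients = new Map(); // socket.id -> arrival timestamp (insertion order = arrival order)

io.on('connection', (socket) => {
  socket.on('wait', (timestamp) => waitingClients.set(socket.id, timestamp));
  socket.on('disconnect', () => waitingClients.delete(socket.id));
});

// Call this when a doc is removed from "connections" and a place frees up.
function notifyNextClient() {
  const [nextId] = [...waitingClients.keys()]; // first key = longest-waiting client
  if (nextId) {
    io.to(nextId).emit('your-turn');
    waitingClients.delete(nextId);
  }
}

server.listen(3000);

// ---- client (Angular, using the socket.io-client package) ----
// const socket = io('http://localhost:3000');
// socket.emit('wait', Date.now());
// socket.on('your-turn', () => { /* unlock the registration component */ });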
Using XMLHttpRequest/AJAX polling. This is not the preferable method: using setInterval you can call the server every 5 seconds or so and perform the operation needed.
setInterval(ajaxCall, 5000); //5000 MS == 5 seconds
function ajaxCall() {
//do your AJAX stuff here
}
Links:
https://socket.io/docs/
https://alligator.io/angular/socket-io/

Can Socket.io emits arrive out of order? What if volatile?

I've been looking around for a definitive answer to this, but I seem to keep finding contradictory answers (e.g. this and this).
Basically, if I
socket.emit('game_update', {n: 1});
from a node.js server and then, 20 ms later,
socket.emit('game_update', {n: 2});
from the same server, is there any way that the n:2 message arrives before the n:1 message? In other words, does the n:1 message "block" the receiving of the n:2 message if the n:1 message somehow got lost on the way?
What if they were volatile emits? My understanding is that the n:1 message wouldn't block the n:2 message -- if the n:1 message got dropped, the n:2 message would still be received whenever it arrived.
Background: I'm building a node.js game server and want to better understand how my game updates are traveling. I'm using volatile emit right now and I would like to increase the server's tick rate, but I want to make sure that independent game updates wouldn't block each other. I would rather the client receive an update every 30 ms with a few dropped updates scattered here and there than have the client receive an update, receive nothing for 200 ms, and then receive 6 more updates all at once.
Disclaimer: I'm not completely familiar with the internals of socket.io.
is there any way that the n:2 message arrives before the n:1 message?
It depends on the transport that you're using. For the polling transport, I think it's fair to say that it's perfectly possible for messages to arrive out-of-order, because each message can arrive over a different connection.
With the websocket transport, which maintains a persistent connection, the message order is reasonably guaranteed.
What if they were volatile emits?
With volatile emits, all bets are off, it's fire-and-forget. I think that in normal situations, the server will wait (and queue up messages) for a client to be ready to receive messages, unless those messages are volatile, in which case the server will just drop them.
From what you're saying, I think that volatile emits are what you want, although once a websocket connection has been established, I don't think the scenario you describe ("receive an update, receive nothing for 200 ms, and then receive 6 more updates all at once") is likely to happen. Perhaps only when the connection gets lost and is re-established.
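For the tick-rate use case in the question, a minimal server-side sketch of a fixed 30 ms broadcast with volatile emits could look like the following (getGameState() is a made-up stand-in for your game logic):

// Broadcast the latest state every 30 ms as a volatile emit, so updates destined
// for a slow or momentarily disconnected client are dropped rather than queued up.
const io = require('socket.io')(3000);

function getGameState() {
  return { n: Date.now() }; // stand-in for the real game state
}

const TICK_MS = 30;
setInterval(() => {
  io.volatile.emit('game_update', getGameState());
}, TICK_MS);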
The answer is yes, messages can possibly arrive out of order, but it is highly unlikely, given that sockets are by nature persistent connections and ordering is all but guaranteed.
According to the Socket.io documentation, messages will be discarded in the case that the client is not connected. This doesn't necessarily fit your use case; however, the documentation itself describes volatile events as an interesting option if, for example, you need to send the position of a character.
// server-side
io.on("connection", (socket) => {
  console.log("connect");
  socket.on("ping", (count) => {
    console.log(count);
  });
});

// client-side
let count = 0;
setInterval(() => {
  socket.volatile.emit("ping", ++count);
}, 1000);
If you restart the server, you will see in the console:
connect
1
2
3
4
# the server is restarted, the client automatically reconnects
connect
9
10
11
Without the volatile flag, you would see:
connect
1
2
3
4
# the server is restarted, the client automatically reconnects and sends its buffered events
connect
5
6
7
8
9
10
11
Note: The documentation explicitly states that this will happen during a server restart, meaning that your connection to the client likely has to be lost in order for the volatile emits to be dropped.
I would say a good practice would be to write your emits as volatile just in case you do get a dropped client; however, this will depend heavily on your game requirements.
As for the goal, I would recommend that you use client-side prediction with some sort of dynamic time system or delta time, based on the client and server keeping a synced clock, to help alleviate some of the problems you can incur. Here's an example of how you can do that; though I'm not a fan of the creator's syntax, it can be easily adapted to your needs.
Hope this helps anyone who hits this topic.
Socket.io - Volatile events
Client Side Prediction

What does $http timeout mean?

When using $http we can set the timeout for it and it will look something like this:
$http.get(url,{timeout: 5000}).success(function(data){});
What does that timeout mean? Does it mean the connection (data download) must be completed within the timeout period? Or does it mean the delay time before receiving a response from the server? What would be the best general minimal timeout setting for a mobile connection?
If the http request does not complete within the specified timeout time, then an error will be triggered.
So, this is kind of like saying the following to the $http.get() function:
I'd like you to fetch me this URL and get me the data
If you do that successfully, then call my .success() handler and give me the data.
If the request takes longer than 5000ms to finish, then rather than continue to wait, trigger a timeout error.
FYI, it looks to me like AngularJS has converted to using standard promise syntax, so you should probably be doing this:
$http.get(url, {timeout: 5000}).then(function(response){
  // successfully received the data here (response.data)
}, function(err) {
  // some sort of error here (could be a timeout error)
});
What does that timeout mean? Does it mean the connection (data download) must be completed within the timeout period?
Yes. If not completed within that time period, it will return an error instead. This avoids waiting a long time for a request.
Or does it mean the delay time before receiving a response from the server?
No, it is not a delay time.
What would be the best general minimal timeout setting for a mobile connection?
This is hard to say without more specifics. Lots of different things might drive what you would set this to. Sometimes, there is no harm in letting the timeout be a fairly long value (say 120 seconds) in case the server or some mobile link happens to be particularly slow some day and you want to give it as much chance as possible to succeed in those circumstances. In other cases (depending upon the particular user interaction), the user is going to give up anyway if the response time is more than 5 seconds so there may be no value in waiting longer than that for a result the user will have already abandoned.
timeout – {number|Promise} – timeout in milliseconds, or promise that should abort the request when resolved.
Source
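Since the timeout option also accepts a promise, you can abort an in-flight request yourself. A small AngularJS sketch, assuming $http, $q and $timeout are injected into your controller or service:

// The request is aborted when the canceler promise resolves, here after 5000 ms.
var canceler = $q.defer();

$http.get(url, { timeout: canceler.promise }).then(function (response) {
  // got the data in time: response.data
}, function (err) {
  // request failed or was aborted by the canceler
});

$timeout(function () {
  canceler.resolve(); // abort the request if it is still pending
}, 5000);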
Timeout means "perform an action after X time", in JS anyway.
