I have a requirement: if my consumer reads messages from the queue but cannot acknowledge some of them due to an issue, I want it to retry the pending acknowledgements after a certain time, let's say every 30 seconds. Can we do that?
Edit:
I have found channel.recover, a function that asks RabbitMQ to redeliver unacknowledged messages. But we have to call this function explicitly. Is there a way for RabbitMQ to deliver unacknowledged messages automatically every certain number of seconds?
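Since channel.recover has to be called explicitly, one workaround is to call it from the consumer on a timer. A minimal sketch using amqplib in Node.js (the queue name, connection URL, and the 30-second interval are illustrative, not from the question):

const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('work');

  ch.consume('work', (msg) => {
    try {
      // ... process msg ...
      ch.ack(msg);
    } catch (err) {
      // leave msg unacknowledged; the recover() below will requeue it
    }
  });

  // basic.recover asks the broker to requeue all unacknowledged messages
  // on this channel, so they are redelivered and can be acked on retry
  setInterval(() => ch.recover(), 30000);
}

main().catch(console.error);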
I have set up the eSignatures API for our app and until recently it has been working perfectly. The part that is now broken is the webhook function. So when a document gets signed by one of our clients, it triggers our webhook cloud function that then updates our database (seems simple?).
The problem is that eSignatures have now updated their timeout to only 8 seconds, which means that my function does not have enough time to run and respond to their servers. It does however still run and update the database correctly, but as it takes longer than 8 seconds, the 200 response never reaches eSignatures and therefore they retry the endpoint every hour (for ~5/6 hours). I have put in a catch so that data is not duplicated when this happens, but ideally we don't want the retries to take place!
My question is basically: is there a way to send a 200 response to eSignatures at the beginning of the function, and then continue with the updates? Or is there another solution to handle this timeout? If anything does fail in the function, I still want to return a 4xx to eSignatures, and in that case we do want the retry.
You can't send a response from your Cloud Function and then continue executing tasks. When the response is sent, the function is stopped. See this link and this one.
But even if you could, sending a response before the tasks in your Cloud Function have finished would prevent you from sending a 4xx response if they fail. Hence, eSignatures would never retry.
I don't think you can do much else aside from possibly optimizing your Cloud Function or increasing the eSignatures timeout, which I don't think is possible ATM.
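To make the trade-off concrete, here is a hedged sketch of doing the work first and only then responding, so a failure can still surface as an error status. It assumes a Firebase HTTP function; updateDatabase is a hypothetical stand-in for the real update logic:

const functions = require('firebase-functions');

exports.esignWebhook = functions.https.onRequest(async (req, res) => {
  try {
    await updateDatabase(req.body); // must finish before the response is sent
    res.status(200).send('OK');     // after this, further work is not guaranteed to run
  } catch (err) {
    console.error(err);
    res.status(400).send('Update failed'); // the 4xx that makes eSignatures retry
  }
});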
I need to send an alert to the chat after a timer goes off.
Scenario:
Remind me to call Bob in 5 minutes
OK, will remind you in 5 minutes
After this dialog, the fulfillment server will start a timer, and when the timer goes off, an event is to be triggered.
But using the event API in API.ai will not trigger a message in the chat window, which was built using the JS API.
Is there another way to achieve this?
This is very much possible. In the intents that you set up on API.ai, use @sys.time so that 'call me in 5 minutes' returns a time like 14:05:00. You can use this parameter to start a timer in your JS script and send the message when the timer expires.
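A rough sketch of the client side (sendBotMessage and the millisecond conversion are assumptions for illustration, not part of API.ai):

// start a client-side timer from the delay extracted by API.ai;
// sendBotMessage is a hypothetical helper that writes to the chat window
function scheduleReminder(delayMs, text) {
  setTimeout(() => {
    sendBotMessage(text);
  }, delayMs);
}

scheduleReminder(5 * 60 * 1000, 'Reminder: call Bob');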
I've been looking around for a definitive answer to this, but I seem to keep finding contradictory answers (e.g. this and this).
Basically, if I
socket.emit('game_update', {n: 1});
from a node.js server and then, 20 ms later,
socket.emit('game_update', {n: 2});
from the same server, is there any way that the n:2 message arrives before the n:1 message? In other words, does the n:1 message "block" the receiving of the n:2 message if the n:1 message somehow got lost on the way?
What if they were volatile emits? My understanding is that the n:1 message wouldn't block the n:2 message -- if the n:1 message got dropped, the n:2 message would still be received whenever it arrived.
Background: I'm building a node.js game server and want to better understand how my game updates are traveling. I'm using volatile emit right now and I would like to increase the server's tick rate, but I want to make sure that independent game updates wouldn't block each other. I would rather the client receive an update every 30 ms with a few dropped updates scattered here and there than have the client receive an update, receive nothing for 200 ms, and then receive 6 more updates all at once.
Disclaimer: I'm not completely familiar with the internals of socket.io.
is there any way that the n:2 message arrives before the n:1 message?
It depends on the transport that you're using. For the polling transport, I think it's fair to say that it's perfectly possible for messages to arrive out-of-order, because each message can arrive over a different connection.
With the websocket transport, which maintains a persistent connection, the message order is reasonably guaranteed.
What if they were volatile emits?
With volatile emits, all bets are off, it's fire-and-forget. I think that in normal situations, the server will wait (and queue up messages) for a client to be ready to receive messages, unless those messages are volatile, in which case the server will just drop them.
From what you're saying, I think that volatile emits are what you want, although once a websocket connection has been established, I don't think the described scenario ("receive an update, receive nothing for 200 ms, and then receive 6 more updates all at once") is likely to happen. Perhaps only when the connection gets lost and is re-established.
The answer is yes, it can possibly arrive later, but it is highly unlikely, given that sockets are by nature persistent connections and ordering is all but guaranteed.
According to the Socket.io documentation, volatile messages will be discarded if the client is not connected. This doesn't necessarily fit your use case; however, the documentation itself describes volatile events as interesting when you need to send, for example, the position of a character.
// server-side
const io = require("socket.io")(3000); // setup assumed; not in the original docs snippet

io.on("connection", (socket) => {
  console.log("connect");
  socket.on("ping", (count) => {
    console.log(count);
  });
});

// client-side
const socket = io("http://localhost:3000"); // setup assumed; not in the original docs snippet

let count = 0;
setInterval(() => {
  socket.volatile.emit("ping", ++count); // volatile: dropped if the client is disconnected
}, 1000);
If you restart the server, you will see in the console:
connect
1
2
3
4
# the server is restarted, the client automatically reconnects
connect
9
10
11
Without the volatile flag, you would see:
connect
1
2
3
4
# the server is restarted, the client automatically reconnects and sends its buffered events
connect
5
6
7
8
9
10
11
Note: The documentation explicitly states that this will happen during a server restart, meaning that your connection to the client likely has to be lost in order for the volatile emits to be dropped.
I would say a good practice would be to mark your emits as volatile in case you do get a dropped client; however, this will depend heavily on your game's requirements.
As for the goal, I would recommend client-side prediction, using some sort of dynamic time system or delta time, with the client and server keeping a synchronized clock, to help alleviate some of the problems you can incur. Here's an example of how you can do that; though I'm not a fan of the creator's syntax, it can be easily adapted to your needs.
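For the clock-sync part, a rough sketch over socket.io (the time:ping/time:pong event names are made up for illustration):

// client-side: estimate the server/client clock offset with a round trip
let clockOffset = 0;
function syncClock(socket) {
  const sentAt = Date.now();
  socket.emit('time:ping');
  socket.once('time:pong', (serverTime) => {
    const rtt = Date.now() - sentAt;
    // assuming symmetric latency, server time ~= serverTime + rtt / 2
    clockOffset = serverTime + rtt / 2 - Date.now();
  });
}

// server-side: answer pings with the server's current time
io.on('connection', (socket) => {
  socket.on('time:ping', () => socket.emit('time:pong', Date.now()));
});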
Hope this helps anyone who hits this topic.
Socket.io - Volatile events
Client Side Prediction
I would like to set the read timeout of the pull request on a subscription. Right now the only options are to set returnImmediately=true or just wait until pubsub returns, which seems to take 90 seconds if no message is published.
I'm using the gcloud-node module to make calls to pubsub. It uses the request module under the hood to make the gcloud API calls. I've updated my local copy of gcloud-node/lib/pubsub/subscription.js to set the request timeout to 30 seconds:
this.request({
  method: 'POST',
  uri: ':pull',
  timeout: 30000, // close the request client-side after 30 seconds
  json: {
    returnImmediately: !!options.returnImmediately,
    maxMessages: options.maxResults
  }
}); // closing parenthesis added; callback omitted in this excerpt
When I do this, the behavior I see is that the connection times out on the client side after 30 seconds, but pubsub still has the request open. If I have two clients pulling on the subscription and one of them times out after 30 seconds, then when a message is published to the topic, there is a 50/50 chance that the remaining listening client will receive it.
Is there a way to tell pubsub to timeout pull connections after a certain amount of time?
UPDATE:
I probably need to clarify my example a bit. I have two clients that connect at the same time and pull from the same subscription. The only difference between the two is that the first one is configured to time out after 30 seconds. Since two clients are connected to the same subscription, pubsub will distribute the message load between them. If I publish a message 45 seconds after both clients connect, there is a 50/50 chance that pubsub will deliver the message to the second client, which has not timed out yet. If I send 10 messages instead of just one, the second client will receive a subset of the 10 messages. It looks like this is because my clients are in a long poll: if a client disconnects, the server has no idea and will try to send published messages on the response of the request made by the client that has timed out. From my tests, this is the behavior I've observed. What I would like to do is send a timeout param in the pull request to tell pubsub to send back a response after 30000 ms if no messages are published during that time. Reading over the API docs, this doesn't seem to be an option.
Setting the request timeout as you have is the correct way to timeout the pull after 30 seconds. The existence of the canceled request might not be what is causing the other pull to not get the message immediately. If your second pull (that does not time out) manages to pull other messages that were published earlier, it won't necessarily wait for additional message that was published after the timeout to come in before completing. It only guarantees to not return more than maxMessages, not to return only once it has exactly maxMessages (if that many are available). Once your publish completes, some later pull will get the message, but there are no guarantees on exactly when that will occur.
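In practice that means treating the client-side timeout as routine and simply issuing the next pull. A hedged sketch with gcloud-node (the option names follow the question's snippet; the callback shape is an assumption about that version of the library):

// keep a long poll open; whether it returns messages or times out, pull again
function pullLoop(subscription) {
  subscription.pull({ returnImmediately: false, maxResults: 10 },
    function (err, messages) {
      if (err) {
        // a client-side timeout (e.g. ETIMEDOUT) lands here; just retry
        console.error('pull failed, retrying:', err.message);
      } else {
        messages.forEach(function (message) {
          // ... process and ack the message ...
        });
      }
      pullLoop(subscription); // issue the next pull
    });
}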
I'm using a timer function to check if the session is valid or not every five seconds.
setInterval(checksession,5000);
function checksession(called_for) {
//alert('check-session')
$.ajax({
type:'POST'
,url:'CheckSession'
,success: validateresult
,data: { antiCSRF : '{{acsrf}}',
session_id: '{{session_id}}'}
//,error: function(){ alert('Session check failed') }
})
}
I would like to know what will happen if I have multiple ajax calls at the same time when the session is checked. Will it be 2 separate threads?
Is this the correct way to check session?
So first off, you're better off (imo) using setTimeout for this rather than setInterval. You really want your next check to happen x seconds after you have the answer from the previous check, not every x seconds regardless (because of server lag, network latency, whatever). Bottom line: imo it's better to do a setTimeout, then do another setTimeout in the ajax callback.
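A sketch of that pattern, reusing the code from the question:

function checksession() {
  $.ajax({
    type: 'POST',
    url: 'CheckSession',
    data: { antiCSRF: '{{acsrf}}', session_id: '{{session_id}}' },
    success: validateresult,
    complete: function () {
      setTimeout(checksession, 5000); // re-arm only after the response arrives
    }
  });
}

setTimeout(checksession, 5000);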
JS is single threaded, so you won't do it in two separate threads, but you can have multiple ajax calls pending at once, which I think is what you mean.
Finally, on "correct". It really depends a bit on what you're trying to accomplish. In general, sessions with sliding expirations (the only real time that any 'check' to the server should matter, since otherwise they can get the expiry once and count the time on the client) are there to timeout after a period of inactivity. If you're having your script 'fake' activity by pinging the server every five seconds, then you might as well just set your expiry to infinity and not have it expire ever. Or set it to expire when the browser window closes, either way.
If you're trying to gracefully handle an expired session, the better way to handle it in Ajax is to just handle the 401 error the server should be sending you if you're not logged in anymore.
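For example, a minimal sketch with jQuery (the /login redirect target is illustrative):

// let the app's normal ajax calls detect the expired session via 401
$.ajaxSetup({
  statusCode: {
    401: function () {
      window.location.href = '/login'; // session expired; re-authenticate
    }
  }
});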