Strange situation.
I'm trying to build a chat application.
I use PostgreSQL 9.3 and Tomcat as the web server.
Here is what happens when one browser sends a message to another:
1 - Browser A sends a message to the server (Tomcat)
2 - Tomcat inserts the message into the database and gets its id:
INSERT INTO messages VALUES ('first message') RETURNING id INTO MSGID;
3 - Tomcat forwards the message to Browser B (the WebSocket recipient)
4 - Browser B sends back the system acknowledgement MSGID_READED
5 - Tomcat updates the message in the database:
UPDATE messages SET readtime = now() WHERE id = MSGID
Everything works, but sometimes at step 5 the UPDATE can't find the message by MSGID...
Very strange, because at step 2 I get the record's id, but at step 5 the row is not found.
Could PostgreSQL be writing slowly, so that the record is not yet visible from a parallel database connection?
UPDATE
I found a solution that works for me: just put the INSERT inside a BEGIN/EXCEPTION/END block.
BEGIN
  INSERT INTO messages (...)
  VALUES (...)
  RETURNING id INTO MSGID;
EXCEPTION
  WHEN unique_violation THEN
    -- nothing
END;
UPDATE 2
After more detailed testing, the BEGIN block change above turned out to have no effect.
The solution was in JavaScript! I now send WebSocket messages from a deferred callback, and the problem is solved!
// WebSocket send message function
// Part of the code; `so` is a WebSocket
send = function(msg) {
    if (msg != null && msg != '') {
        var f = function() {
            var mm = msg; // capture the message for the deferred call
            // JCC.log('SENT: [' + mm + ']');
            so.send(mm);
        };
        // Defer the actual send to a later event-loop tick
        setTimeout(f, 1);
    }
};
Ok, so the problem is that normally writers do not block readers. This means that your INSERT happens, and the UPDATE can fire before the inserting transaction commits, so the row is not yet visible to it. This race condition in your application produces the problem you see.
Your best option here is either to switch to serializable snapshot isolation or to do what you have done and handle exceptions around the insert. One way or another you end up with additional error handling (with serializable isolation, a serialization failure exception may sometimes occur and you have to retry the transaction).
In your case, despite the performance penalty of exception handling in plpgsql, you are best off doing things the way you are currently doing them, because that avoids the locking issues and waiting for the transaction to complete.
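If you prefer to handle it on the application side instead, another option is to simply retry the UPDATE a few times until the inserted row becomes visible. Here is a minimal sketch using the Node.js pg package (shown in Node.js for brevity; the same retry idea applies in your Tomcat code, and the function name and retry parameters are illustrative assumptions):

// Retry the UPDATE until the row is visible or attempts run out (sketch).
const { Client } = require('pg');

async function markAsRead(client, msgId, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    const result = await client.query(
      'UPDATE messages SET readtime = now() WHERE id = $1',
      [msgId]
    );
    if (result.rowCount > 0) {
      return true; // row found and updated
    }
    // Row not committed/visible yet; wait briefly before retrying
    await new Promise((resolve) => setTimeout(resolve, 50));
  }
  return false; // give up after the configured number of attempts
}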
Related
I created an SQS queue with default settings. I published two messages to it, and I would like to read them back at the same time. I tried it like this:
const sqsClient = new SQSClient({ region: REGION });
const params = {
AttributeNames: ["SentTimestamp"],
MaxNumberOfMessages: 5,
MessageAttributeNames: ["All"],
QueueUrl: queueURL,
WaitTimeSeconds: 5,
};
const data = await sqsClient.send(new ReceiveMessageCommand(params));
const messages = data.Messages ?? [];
console.log(messages.length);
Unfortunately, only one message is returned, no matter what I provide in MaxNumberOfMessages. What can cause this, and how can I fix it?
I was able to find a similar question, but it has only one answer, referring to a 3rd party library.
A ReceiveMessageCommand does not guarantee that you will get exactly the number of messages specified for MaxNumberOfMessages. In fact the documentation says the following:
Short poll is the default behavior where a weighted random set of machines is sampled on a ReceiveMessage call. Thus, only the messages on the sampled machines are returned. If the number of messages in the queue is small (fewer than 1,000), you most likely get fewer messages than you requested per ReceiveMessage call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular ReceiveMessage response. If this happens, repeat the request.
You must use long-polling to receive multiple messages. This is essentially setting the WaitTimeSeconds to a greater value (5 seconds should be enough).
And you must have a larger number of messages in the queue to be able to fetch multiple messages with one call.
To summarize:
SQS is a distributed system; each call polls only a subset of its machines.
Messages are distributed across those machines, so if you have a small number of messages, you might fetch only one message, or none.
Test your code with a larger set of sent messages and put your receiving call in a loop.
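For example, a receive loop along these lines will drain the queue across multiple calls. This is a minimal sketch built on the code from the question (REGION and queueURL are assumed to be defined as they are there):

import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";

const sqsClient = new SQSClient({ region: REGION });

// Keep polling until a receive call comes back empty.
while (true) {
  const data = await sqsClient.send(new ReceiveMessageCommand({
    QueueUrl: queueURL,
    MaxNumberOfMessages: 5,
    WaitTimeSeconds: 5, // long polling
  }));
  const messages = data.Messages ?? [];
  if (messages.length === 0) break; // queue is (momentarily) drained
  for (const message of messages) {
    console.log(message.Body);
    // Delete each message after it has been processed
    await sqsClient.send(new DeleteMessageCommand({
      QueueUrl: queueURL,
      ReceiptHandle: message.ReceiptHandle,
    }));
  }
}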
Infra-Overview:
I have a setup where I read a set of messages from IBM MQ, process those messages in a k8s cluster environment, and send them to the destination host.
Issue:
I observed that sometimes the flow of messages is huge, and before they are sent to the destination host our pod fails and restarts. When that happens we lose all of those messages, because we follow the read-and-delete approach from the ibmmq example.
Expected Solution:
I am looking for a solution where we don't lose track of the messages until they have been sent to the destination host.
What I tried:
IBM MQ has the concept of a unit of work, but since we can't accept a delay between reading and processing, I can't wait for a single message to be processed before reading the next one, as that might be a major performance setback.
Code language:
Node.js
As the comments suggest there are a number of ways to skin this cat, but you will need to use transactions.
As soon as you create the connection with the transaction option, the transaction scope begins. It is closed, and the next transaction begins, when you either commit or roll back.
So you should handle the messages in batches that make sense to your application, and commit when the batch is complete. If your application is killed by k8s, all uncommitted read messages will be rolled back, and a backout queue process stops poison messages.
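As a rough sketch, batched processing with the ibmmq Node.js package might look like the following. onMessage and sendToDestination are hypothetical placeholders for your own read and delivery logic; only mq.Cmit and mq.Back are real calls from the package:

// Sketch: process messages in batches under syncpoint, then commit.
const mq = require('ibmmq');
const MQC = mq.MQC;

const BATCH_SIZE = 50; // tune to your throughput
let batch = [];

function onMessage(hConn, message) {
  // Each message was read with MQGMO_SYNCPOINT, so it is uncommitted
  batch.push(message);
  if (batch.length >= BATCH_SIZE) {
    const toSend = batch;
    batch = [];
    sendToDestination(toSend) // hypothetical delivery function
      .then(() => mq.Cmit(hConn, (err) => {
        if (err) console.error('Error on commit', err);
      }))
      .catch(() => mq.Back(hConn, (err) => {
        if (err) console.error('Error on rollback', err);
      }));
  }
}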
Edit: section added below to show sample code and an explanation of backout queues.
In your normal processing, if an app gets stopped before it has had time to process the message, you will want that message returned to the queue, so that it is still available to be processed.
To enable this rollback you need to OR the MQC.MQGMO_SYNCPOINT option into the get message options:
gmo.Options |= MQC.MQGMO_SYNCPOINT
Then, if all goes well, you can commit:
mq.Cmit(hConn, function(err) {
if (err) {
debug_warn('Error on commit', err);
} else {
debug_info('Commit was successful');
}
});
or rollback
mq.Back(hConn, function(err) {
if (err) {
debug_warn('Error on rollback', err);
} else {
debug_info('rollback was successful');
}
});
If you roll back, the message goes back to the queue, which means it is also the next message that your app will read. This can create a poison message loop. So you should also set up a backout queue, with pass-all-context permissions for your app user, and a backout threshold.
Say you set the threshold to 5. The message can then be read 5 times, each followed by a rollback. Your app needs to check the threshold, decide that the message is a poison message, and move it off the queue.
To check the backout threshold (and the backout queue name) you can use the following code
// Remember to or in the Inquire option on the Open
openOptions |= MQC.MQOO_INQUIRE;
...
attrs = [ new mq.MQAttr(MQC.MQIA_BACKOUT_THRESHOLD),
new mq.MQAttr(MQC.MQCA_BACKOUT_REQ_Q_NAME) ];
mq.Inq(hObj, attrs, (err, selectors) => {
if (err) {
debug_warn('Error retrieving backout threshold', err);
} else {
debug_info('Attributes have been found');
selectors.forEach((s) => {
switch (s.selector) {
case MQC.MQIA_BACKOUT_THRESHOLD:
debug_info('Threshold is ', s.value);
break;
case MQC.MQCA_BACKOUT_REQ_Q_NAME:
debug_info('Backout queue is ', s.value);
break;
}
});
}
});
When getting the message your app can use mqmd.BackoutCount to check how often the message has been rolled back.
if (mqmd.BackoutCount >= threshold) {
...
}
What I have noticed is that if the same application instance repeatedly calls rollback on the same message, then at the threshold an MQRC_HOBJ_ERROR error is thrown, which your app can check for in order to discard the message.
If it's a different app instance, it doesn't get the MQRC_HOBJ_ERROR error, but it can check the backout threshold and discard the message, remembering to commit the discard action.
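Putting that together, here is a minimal sketch of the poison message check, reusing the threshold obtained from the Inq call above (moveToBackoutQueue and processMessage are hypothetical helpers):

// Sketch: discard a poison message once the backout threshold is reached.
if (mqmd.BackoutCount >= threshold) {
  moveToBackoutQueue(message); // hypothetical: preserve it on the backout queue
  // Commit so that the destructive get (the discard) becomes permanent
  mq.Cmit(hConn, (err) => {
    if (err) debug_warn('Error committing discard', err);
  });
} else {
  processMessage(message); // hypothetical: normal processing path
}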
See https://github.com/ibm-messaging/mq-dev-patterns/tree/master/transactions/JMS/SE for more information.
As an alternative you could use KEDA - https://keda.sh - which works with k8s to monitor your queue depth and scale according to the number of messages waiting to be processed, as opposed to CPU / memory consumption. That way you can scale up when there are lots of messages waiting to be processed, and slowly scale down when the queue becomes manageable. Here is a link to getting started - https://github.com/ibm-messaging/mq-dev-patterns/tree/master/Go-K8s - the example is for a Go app, but it applies equally to Node.js.
I'm experimenting with Node and its child_process module.
My goal is to create a server which will run on a maximum of 3 processes (1 main and optionally 2 children).
I'm aware that the code below may be incorrect, but it displays interesting results.
const app = require ("express")();
const {fork} = require("child_process")
const maxChildrenRuning = 2
let childrenRunning = 0
app.get("/isprime", (req, res) => {
if(childrenRunning+1 <= maxChildrenRuning) {
childrenRunning+=1;
console.log(childrenRunning)
const childProcess = fork('./isprime.js');
childProcess.send({"number": parseInt(req.query.number)})
childProcess.on("message", message => {
console.log(message)
res.send(message)
childrenRunning-=1;
})
}
})
function isPrime(number) {
...
}
app.listen(8000, ()=>console.log("Listening on 8000") )
I'm launching 3 requests with numbers around 5*10^9.
After 30 seconds I receive 2 responses with correct results.
The CPU then stops doing hard work and goes idle.
Surprisingly, after another 1 minute 30 seconds, one process starts working on the still-pending 3rd request and finishes after another 30 seconds with the correct answer. Console log shown below:
> node index.js
Listening on 8000
1
2
{ number: 5000000029, isPrime: true, time: 32471 }
{ number: 5000000039, isPrime: true, time: 32557 }
1
{ number: 5000000063, isPrime: true, time: 32251 }
Either Express checks pending requests once in a while, or my browser re-sends the request every so often while it is pending. Can anybody explain what is happening here and why? How can I correctly achieve my goal?
The way your server code is written, if you receive a /isprime request while two child processes are already running, your request handler for /isprime does nothing. It never sends any response. The first if test fails and nothing happens afterwards. So that request will just sit there with the client waiting for a response. Depending upon the client, it will probably eventually time out as a dead/inactive request and the client will shut it down.
Some clients (like browsers) may assume that something just got lost in the network and retry the request by sending it again. My guess is that this is what is happening in your case: the browser eventually times out and then resends the request. By the time it retries, fewer than two child processes are running, so the retry gets processed.
You can verify that the browser is retrying automatically by going to the network tab in the Chrome debugger, watching exactly what the browser sends to your server, and seeing the third request time out and get retried.
Note that this code seems to be only partially implemented, because you initially start two child processes but never reuse them. Once one finishes and you decrement childrenRunning, your code will start another child process for the next request. Probably what you really want is to keep track of the child processes you started and, when one finishes, add it to an array of "available child processes" so that a new request can use an existing child process that is already started but idle.
You also need to either queue incoming requests when all the child processes are busy, or send some sort of error response to the HTTP request. Never sending an HTTP response to an incoming request is a poor design that just leads to inefficiencies (connections hanging around much longer than needed that never actually accomplish anything).
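For instance, here is a minimal sketch of the queueing approach, based on the code in the question (the child-process reuse refinement is left out for brevity):

const app = require("express")();
const { fork } = require("child_process");

const maxChildrenRunning = 2;
let childrenRunning = 0;
const pending = []; // requests waiting for a free child slot

function runJob(req, res) {
  childrenRunning += 1;
  const childProcess = fork("./isprime.js");
  childProcess.send({ number: parseInt(req.query.number) });
  childProcess.on("message", (message) => {
    res.send(message);
    childrenRunning -= 1;
    // A slot is free: start the next queued job, if any
    const next = pending.shift();
    if (next) runJob(next.req, next.res);
  });
}

app.get("/isprime", (req, res) => {
  if (childrenRunning < maxChildrenRunning) {
    runJob(req, res);
  } else {
    pending.push({ req, res }); // queue instead of silently dropping
  }
});

app.listen(8000, () => console.log("Listening on 8000"));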
I have created a Redis cluster with 30 instances (15 masters / 15 replicas). With Python code I connected to these instances, found the masters, and then wanted to add some keys to them.

def settomasters(port, host):
    r = redis.Redis(host=host, port=port)
    r.set("key" + port, "value")

Error:
redis.exceptions.ResponseError: MOVED 12539 127.0.0.1:30012
If I try to set a key from redis-cli -c -p portofmyinstance, sometimes I get a redirection message that tells me where the key is stored.
I know that in the case of get requests, for example, a smart client is needed to redirect the request to the correct node (the node that holds the key), otherwise a MOVED error occurs. Is it the same situation here? Do I need to catch the redis.exceptions.ResponseError and try the set again?
while True:
    try:
        r.set("key", "value")
        break
    except:
        print "error"

My first try was the code above, but without success; the set operation never succeeds.
On the other hand, the JavaScript code below does not throw an error, and I cannot figure out why:
var redis = require('redis-stream'),
client = new redis(30001, '127.0.0.1');
// Open stream
var stream = client.stream();
// Example of setting 200 records
for(var record = 0; record <200; record++) {
var command = ['set', 'qwerty' + record, 'QWERTYUIOP'];
stream.redis.write( redis.parse(command) );
}
stream.on('close', function () {
console.log('Completed!');
});
// Close the stream after batch insert
stream.end();
Any help will be appreciated, thanks.
With a Redis cluster you can use a normal Redis client only if, as @Antonis said, you "find for the certain key the slot that belongs and then the slots that each master serves. With this information i can set keys to the correct node without moved redirection errors." Otherwise you need a cluster-aware client such as http://redis-py-cluster.readthedocs.io/en/master/
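For illustration, here is a minimal sketch in plain redis-py that follows a single MOVED redirection by hand. Parsing the error text like this is fragile, which is exactly what a cluster-aware client handles for you properly; the function name is illustrative:

import redis

def set_following_moved(host, port, key, value):
    r = redis.Redis(host=host, port=port)
    try:
        r.set(key, value)
    except redis.exceptions.ResponseError as e:
        msg = str(e)
        if msg.startswith("MOVED"):
            # The error looks like: "MOVED 12539 127.0.0.1:30012"
            target = msg.split()[2]
            new_host, new_port = target.split(":")
            # Retry on the node that actually owns the hash slot
            redis.Redis(host=new_host, port=int(new_port)).set(key, value)
        else:
            raise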
I have a WebSocket connection set up for a basic web chat server.
Now, whenever a message is received, it is sent to a function which outputs it on the screen.
socket.onmessage = function(msg){output(msg);}
However, there are certain technical commands which the user can send to the server through the connection which elicit a technical response, not meant to be output on the screen.
How can I grab the server response which immediately follows one of these technical messages?
Do I put a separate socket.onmessage right after the block of code which sends the technical message? I imagine that would capture all future messages, and I just want the next one.
Ideas?
WebSocket is asynchronous, so trying to get the "next" message received is not the right solution. There can be multiple messages in flight in both directions at the same time. Also, most actions in JavaScript are triggered by asynchronous events (a timeout firing, or the user clicking on something), which means you don't have synchronous control over when sends happen. Furthermore, the onmessage handler is a persistent setting: once it is set, it receives all messages until it is unset. You need some way of distinguishing control messages from data messages in the message itself, and if you need to correlate responses with sent messages then you also need some kind of message sequence number (or another form of unique message ID).
For example, this sends a control message to the server and has a message handler which can distinguish between control messages and other messages:
var nextSeqNum = 0;
var waitForMsg = null;
...
var msg = {id: nextSeqNum, mtype: "control", data: "some data"};
waitForMsg = nextSeqNum;
nextSeqNum += 1;
ws.send(JSON.stringify(msg));
...
ws.onmessage = function (e) {
    var msg = JSON.parse(e.data);
    if (msg.mtype === "control") {
        if (msg.id === waitForMsg) {
            // We got a response to our message
        } else {
            // We got an async control message from the server
        }
    } else {
        output(msg.data);
    }
};
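If several control requests can be outstanding at once, a single waitForMsg variable is not enough. Here is a sketch that tracks pending requests by id instead (sendControl and the callback convention are illustrative, not part of the original code):

var nextSeqNum = 0;
var pendingRequests = {}; // id -> callback waiting for the response

function sendControl(ws, data, onReply) {
    var msg = {id: nextSeqNum, mtype: "control", data: data};
    pendingRequests[nextSeqNum] = onReply;
    nextSeqNum += 1;
    ws.send(JSON.stringify(msg));
}

ws.onmessage = function (e) {
    var msg = JSON.parse(e.data);
    if (msg.mtype === "control" && pendingRequests[msg.id]) {
        pendingRequests[msg.id](msg); // invoke the waiting callback
        delete pendingRequests[msg.id];
    } else if (msg.mtype !== "control") {
        output(msg.data);
    }
};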
You can packetise the data, I mean, separate it with a special character or characters, forming a string like this:
"DataNotToBeShown" + "$$" + "DataToBeShown"; // if $$ is the separating character
Then you can split the string in JavaScript like this:
var recv = msg.data.split('$$');
So the data not to be shown is in recv[0] and the data to be shown is in recv[1]. Then use them however you want.
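Note that this breaks if "$$" can ever occur inside the payload itself. A minimal sketch of a JSON-based alternative, in the spirit of the previous answer (the show field and handleControl handler are illustrative assumptions):

socket.onmessage = function (e) {
    var msg = JSON.parse(e.data); // e.g. {"show": false, "data": "..."}
    if (msg.show) {
        output(msg.data); // messages meant for the screen
    } else {
        handleControl(msg.data); // hypothetical handler for technical replies
    }
};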