I have created a Redis cluster with 30 instances (15 masters / 15 slaves). Using Python code I connected to these instances, found the masters, and then wanted to add some keys to them:
import redis

def settomasters(port, host):
    r = redis.Redis(host=host, port=port)
    r.set("key" + str(port), "value")
Error:
redis.exceptions.ResponseError: MOVED 12539 127.0.0.1:30012
If I try to set a key from redis-cli -c -p portofmyinstance, I sometimes get a redirection message telling me where the key is stored.
I know that for GET requests, for example, a smart client is needed in order to redirect the request to the correct node (the node that holds the key); otherwise a MOVED error occurs. Is it the same situation here? Do I need to catch redis.exceptions.ResponseError and try the SET again?
while True:
    try:
        r.set("key", "value")
        break
    except:
        print "error"
        pass
My first attempt was the code above, without success: the set operation never succeeds.
On the other hand, the JavaScript code below does not throw an error, and I cannot figure out why:
var redis = require('redis-stream'),
    client = new redis(30001, '127.0.0.1');

// Open stream
var stream = client.stream();

// Example of setting 200 records
for (var record = 0; record < 200; record++) {
    var command = ['set', 'qwerty' + record, 'QWERTYUIOP'];
    stream.redis.write(redis.parse(command));
}

stream.on('close', function () {
    console.log('Completed!');
});

// Close the stream after batch insert
stream.end();
Any help will be appreciated, thanks.
With a Redis cluster you can use the normal redis client only if, as @Antonis said, you "find for the certain key the slot it belongs to, and then the slots that each master serves. With this information I can set keys to the correct node without MOVED redirection errors." Otherwise you need a cluster-aware client such as http://redis-py-cluster.readthedocs.io/en/master/
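For illustration, a minimal sketch with redis-py-cluster (the startup node and key name are placeholders, assuming one of your cluster nodes listens on 127.0.0.1:30001; depending on your redis-py-cluster version the class may be named RedisCluster instead of StrictRedisCluster):

from rediscluster import StrictRedisCluster

# Only one reachable node is needed; the client discovers the rest of the cluster
# and keeps a slot-to-node map, so MOVED redirections are handled for you.
startup_nodes = [{"host": "127.0.0.1", "port": "30001"}]
rc = StrictRedisCluster(startup_nodes=startup_nodes, decode_responses=True)

rc.set("key30001", "value")
print(rc.get("key30001"))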
Related
I have this code: whenever the user hits a certain endpoint, it is supposed to emit a message to a Python client, which then fetches some data and returns it back as a callback so I can show the data to the users.
This is the server-side code (NodeJS):
app.get('/hueapi/lights', verifyToken, (req, res) => {
    const bridgeIDFromApp = req.header('bridgeID');
    const socketID = socketRefDic[bridgeIDFromApp]['socketID'];

    io.to(socketID).emit('getAllLights', 'getAllLights', function(data){
        res.send(data); // The callback function that shows the data given by the python client
    });
});
It just sends a simple 'getAllLights' message to the Python client in question, which then runs the function that provides the data.
This is the client-side code (Python):
def getAllLights(data):
    lightData = requests.get('http://localhost:4000/lights/')
    return lightData
Am I doing the callback wrong? I just want to send the data straight back to the user after retrieving it.
EDIT:
I am now using io.to(...).emit(...) instead of io.send(...).emit(...), yet I am still getting the error saying I'm broadcasting. But I'm not, am I?
I don't think the ack mechanism will work for you unless it is implemented on the Python side as well. The reason you are still getting the broadcasting error is that io.to does not return a socket; it returns a room, which does broadcast.
It is probably easier to just have a separate endpoint on the client side, which your Python code doesn't even attempt from what I can see. The Python code should still be able to write to the socket.
So to implement your own ack you would simply write your ack message to the socket. If you need it to be statefully namespaced, then you would have to include an address for the Python code to reference in your getAllLights message.
Node:
app.get('/hueapi/lights', verifyToken, (req, res) => {
    const bridgeIDFromApp = req.header('bridgeID');
    const socketID = socketRefDic[bridgeIDFromApp]['socketID'];
    const uniqAck = "some unique endpoint path";
    const socket = getSocketByID(socketID);

    socket.on(uniqAck, (data) => res.send(data)); // answer the HTTP request when the ack event arrives
    socket.emit('getAllLights', 'getAllLights:' + uniqAck);
});
Python:
def getAllLights(data):
    lightData = requests.get('http://localhost:4000/lights/')
    return (lightData, data.split(":")[1])  # split if it's not already done by this point

# capture 'value' from getAllLights when it is called...
socket.emit(value[1], value[0])
Goal: To have a Node.js server where only one connection is active at a time.
I can temporarily remove the connection event listener on the server, or only set it up once in the first place by calling once instead of on, but then any connection that gets made while there is no connection event listener seems to get lost. From strace, I can see that Node is still accept(2)ing on the socket. Is it possible to get it not to do that, so that the kernel will instead queue up all incoming requests until the server is ready to accept them again (or until the backlog configured in listen(2) is exceeded)?
Example code that doesn’t work as I want it to:
#!/usr/bin/node
const net = require("net");

const server = net.createServer();

function onConnection(socket) {
    socket.on("close", () => server.once("connection", onConnection));

    let count = 0;
    socket.on("data", (buffer) => {
        count += buffer.length;
        if (count >= 16) {
            socket.end();
        }
        console.log("read " + count + " bytes total on this connection");
    });
}

server.once("connection", onConnection);
server.listen(8080);
Connect to localhost, port 8080, with the agent of your choice (nc, socat, telnet, …).
Send less than 16 bytes, and witness the server logging to the terminal.
Without killing the first agent, connect a second time in another terminal. Try to send any number of bytes – the server will not log anything.
Send more bytes on the first connection, so that the total number of bytes sent there exceeds 16. The server will close this connection (and again log this to the console).
Send yet more bytes on the second connection. Nothing will happen.
I would like the second connection to block until the first one is over, and then to be handled normally. Is this possible?
.. so that the kernel will instead queue up all incoming requests until the server is ready to accept them again (or the backlog configured in listen(2) is exceeded)?
...
I would like the second connection to block until the first one is over, and then to be handled normally. Is this possible?
Unfortunately, it is not possible without catching the connection events that are emitted and managing the accepted connections in your application rather than relying on the OS backlog. Node calls libuv with an OnConnection callback that will try to accept all connections and make them available in the JS context.
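A minimal sketch of that application-level approach (my own illustration, not from your code base): park extra sockets in a queue, pause them so their data events are buffered, and resume the next one when the active connection closes.

#!/usr/bin/node
const net = require("net");

const server = net.createServer();
const waiting = [];   // parked connections
let active = null;    // the single connection currently being served

function serve(socket) {
    active = socket;
    socket.resume();  // re-enable 'data' events in case the socket was paused

    let count = 0;
    socket.on("data", (buffer) => {
        count += buffer.length;
        if (count >= 16) {
            socket.end();
        }
        console.log("read " + count + " bytes total on this connection");
    });

    socket.on("close", () => {
        active = null;
        if (waiting.length > 0) {
            serve(waiting.shift());
        }
    });
}

server.on("connection", (socket) => {
    if (active) {
        socket.pause();   // data sent meanwhile stays buffered until it is this socket's turn
        waiting.push(socket);
    } else {
        serve(socket);
    }
});

server.listen(8080);

This is not the kernel-level queueing you asked for (Node still accepts the sockets), but from the clients' point of view the second connection simply stays idle until the first one is over and is then handled normally.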
Hi guys, I have a problem that I don't really know how to solve. It's also a bit strange :/
Basically, I have created this Lambda function to connect to a MySQL DB using the Node package 'mysql'.
If I run the function from the command line on my PC using the command 'sls function run function1' and make different queries, everything is fine.
But when I call the function from a web browser using the link, I have to refresh the page 2 times to get the right result, because on the first refresh the server responds with the old result.
I have noticed that from the command line I always get a different threadID, while from the web browser it is always the same.
Also, I don't close the connection in the Lambda function code, because everything is fine if I run the function from the command line; but from the browser I can only make 2 queries and then I get a message saying that I cannot use a closed connection.
So it seems like Lambda stores the old query result when I call it from the web browser.
Obviously I'm making some stupid mistake, but I don't know how to solve it.
Does anyone have an idea?
Thanks :)
'use strict';
//npm packages
var mysql = require('mysql');
var deasync = require('deasync');

//variables
var goNext = false;        //used to synchronize deasync
var error = false;         //becomes TRUE if an error occurred during the connection to the DB
var dataColumnTable;       //the data that you extract from the query to the DB
var errorMessage;

//----------------------------------------------------------------------------------------------------------------
//always same credentials
var connection = mysql.createConnection({
    host     : 'hostAddress',
    user     : 'Puser',
    password : 'password',
    port     : '3306',
    database : 'database1',
});

//----------------------------------------------------------------------------------------------------------------
module.exports.handler = function(event, context, callback) {
    var Email = event.email;

    connection.query('SELECT City, Address FROM Person WHERE E_Mail=?', Email, function(err, rows) {
        if (err) {
            console.log("Cannot connect to DB");
            console.log(err);
            error = true;
            errorMessage = err;
        } else {
            console.log("data from column acquired!");
            dataColumnTable = rows;
        }
        //connection.end(function(err) {
        //    connection.destroy();
        //});
        //console.log("Connection closed!");
        goNext = true;
    });

    require('deasync').loopWhile(function(){ return goNext != true; });
    //----------------------------------------------------------------------------------------------------------------
    if (error == true)
        return callback('Error ' + errorMessage);
    else
        return callback(null, dataColumnTable); //return a JSON object
    //end of handler
};
Disclaimer: I'm not very familiar with AWS and/or AWS Lambda.
http://docs.aws.amazon.com/lambda/latest/dg/programming-model-v2.html states (emphasis mine):
Your Lambda function code must be written in a stateless style, and have no affinity with the underlying compute infrastructure. Your code should expect local file system access, child processes, and similar artifacts to be limited to the lifetime of the request. Persistent state should be stored in Amazon S3, Amazon DynamoDB, or another cloud storage service. Requiring functions to be stateless enables AWS Lambda to launch as many copies of a function as needed to scale to the incoming rate of events and requests. These functions may not always run on the same compute instance from request to request, and a given instance of your Lambda function may be used more than once by AWS Lambda.
Opening a connection and storing it in a variable outside your handler function is state. The connection will likely be closed between requests or even before your first request. Your Lambda function may be reused (hence the identical thread IDs).
My assumption (and an attempt to solve this problem) would be that you need to create the connection on every request (i.e., inside your handler) and must not expect any value to still be initialized or left over from the last request (except for constants, probably).
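A minimal sketch of that approach, reusing the query from your own snippet but creating and closing the connection inside the handler (the credentials and table are the placeholders from your code):

'use strict';
var mysql = require('mysql');

module.exports.handler = function (event, context, callback) {
    // fresh connection per invocation: no state is shared between requests
    var connection = mysql.createConnection({
        host     : 'hostAddress',
        user     : 'Puser',
        password : 'password',
        port     : '3306',
        database : 'database1'
    });

    connection.query('SELECT City, Address FROM Person WHERE E_Mail=?', [event.email], function (err, rows) {
        // close the connection before returning, so nothing leaks into the next invocation
        connection.end(function () {
            if (err) {
                callback('Error ' + err);
            } else {
                callback(null, rows);
            }
        });
    });
};

This also removes the need for deasync, since the callback is only invoked once the query (and the connection.end) has finished.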
I have a Node application handling some ZeroMQ events coming from another application, using the Node-ZMQ bindings found here: https://github.com/JustinTulloss/zeromq.node
The issue I am running into is that one of the operations triggered by an event takes a long time to process, and this appears to block any other event from being processed during that time. Although the application is not currently clustered, doing so would only afford a few more processes and doesn't really solve the issue. I am wondering if there is a way to allow these async calls not to block other incoming requests while they process, and how I might go about implementing that.
Here is a highly condensed/contrived code example of what I am doing currently:
var zmq = require('zmq');
var zmqResponder = zmq.socket('rep');
var Q = require('q');
var Client = require('node-rest-client').Client;
var client = new Client();

zmqResponder.on('message', function (msg, data) {
    var parsed = JSON.parse(msg);
    logging.info('ZMQ Request received: ' + parsed.event);
    switch (parsed.event) {
        case 'create':
            //Typically a short-running process, not an issue
        case 'update':
            //Long-running process, this is the issue
            serverRequest().then(function (response) {
                zmqResponder.send(JSON.stringify(response));
            });
    }
});

function serverRequest() {
    var deferred = Q.defer();
    client.get(function (data, response) {
        if (response.statusCode !== 200) {
            deferred.reject(data.data);
        } else {
            deferred.resolve(data.data);
        }
    });
    return deferred.promise;
}
EDIT: Here's a gist of the code: https://gist.github.com/battlecow/cd0c2233e9f197ec0049
I think, through the comment thread, I've identified your issue. REQ/REP has a strict synchronous message order guarantee... You must receive-send-receive-send-etc. REQ must start with send and REP must start with receive. So, you're only processing one message at a time because the socket types you've chosen enforce that.
If you were using a different, non-event-driven language, you'd likely get an error telling you what you'd done wrong when you tried to send or receive twice in a row, but node lets you do it and just queues the subsequent messages until it's their turn in the message order.
You want to change REQ/REP to DEALER/ROUTER and it'll work the way you expect. You'll have to change your logic slightly for the ROUTER socket to get it to send appropriately, but everything else should work the same.
Rough example code, using the relevant portions of the posted gist:
var zmqResponder = zmq.socket('router');

zmqResponder.on('message', function (msg, data) {
    var peer_id = msg[0];
    var parsed = JSON.parse(msg[1]);
    switch (parsed.event) {
        case 'create':
            // build parsedResponse, then...
            zmqResponder.send([peer_id, JSON.stringify(parsedResponse)]);
            break;
    }
});

zmqResponder.bind('tcp://*:5668', function (err) {
    if (err) {
        logging.error(err);
    } else {
        logging.info("ZMQ awaiting orders on port 5668");
    }
});
... you need to grab the peer_id (or whatever you want to call it; in ZMQ nomenclature it's the socket ID of the socket you're sending from, think of it as an "address" of sorts) from the first frame of the message you receive, and then send it as the first frame of the message you send back.
By the way, I just noticed in your gist you are both connect()-ing and bind()-ing on the same socket (zmq.js lines 52 & 143, respectively). Don't do that. Inferring from other clues, you just want to bind() on this side of the process.
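For completeness, a rough sketch of what the sending side could look like with a DEALER socket, assuming the other application can be changed too (the endpoint and payloads below are made up):

var zmq = require('zmq');
var requester = zmq.socket('dealer');

requester.connect('tcp://127.0.0.1:5668');

// Replies arrive whenever the ROUTER sends them; the peer_id frame is
// stripped off automatically, so only the payload shows up here.
requester.on('message', function (reply) {
    console.log('got reply: ' + reply.toString());
});

// Unlike REQ, a DEALER can fire off several requests without waiting
// for each reply in between.
requester.send(JSON.stringify({ event: 'update' }));
requester.send(JSON.stringify({ event: 'create' }));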
Strange situation.
I'm trying to build a chat application.
I use PostgreSQL 9.3 and Tomcat as the web server.
This is what happens when one browser sends a message to another:
1 - Browser A sends a message to the server (Tomcat)
2 - Tomcat puts the message into the database and gets its id
INSERT INTO messages VALUES ('first message') RETURNING id INTO MSGID;
3 - Tomcat forwards the message to Browser B (the WebSocket recipient)
4 - Browser B sends a system reply: MSGID_READED
5 - Tomcat updates the message in the database
UPDATE messages SET readtime = now() WHERE id = MSGID
Everything works, but sometimes at step 5 the UPDATE can't find the message by MSGID...
Very strange, because at step 2 I do get the message record ID, but at step 5 it is not found.
Could it be that PostgreSQL writes slowly and this record is not yet visible from a parallel DB connection?
UPDATE
I found a solution for myself: just put the insert inside a BEGIN/EXCEPTION/END block.
BEGIN
    INSERT INTO messages (...)
    VALUES (...)
    RETURNING id INTO MSGID;
EXCEPTION
    WHEN unique_violation THEN
        -- nothing
END;
UPDATE 2
In more detailed tests, the change above with the BEGIN block had no effect.
The solution was in JavaScript! I sent the WebSocket messages from another "thread" (a setTimeout callback) and the problem was solved!
// WebSocket send message function
// Part of the code; 'so' is a web socket
send = function(msg) {
    if (msg != null && msg != '') {
        var f = function() {
            var mm = msg;
            // JCC.log('SENT: [' + mm + ']');
            so.send(mm);
        };
        setTimeout(f, 1);
    }
};
Ok, so the problem is that normally writers do not block readers. This means that your insert happens, and the follow-up statement (your UPDATE at step 5) fires before the insert's transaction commits. This introduces a race condition in your application, which causes the problem you see.
Your best option here is either to switch to serializable snapshot isolation or to do what you have done and handle the exception on the insert. One way or another you end up with additional exception handling that must be dealt with (with serializable isolation, a serialization failure exception may sometimes occur and you may have to retry).
In your case, despite the performance penalty of exception handling in PL/pgSQL, you are best off doing things the way you are currently doing them, because that avoids the locking issues and waiting for the transaction to complete.