CubeJS PostgresDriver fails authentication but it works in terminal - javascript

I can connect to my database just fine by entering psql -U <db_user> -W -h localhost -p <db_port> -d <db_name>. However, when I set up that same connection in a Cubejs backend, queries like http://localhost:4000/cubejs-api/v1/load?query=... return {"error": "Error: password authentication failed for user \"<db_user>\""}.
The database is actually connected via SSH tunnel. I've connected a truly local db to Cubejs before, so I suspect this might be causing trouble.
I attempt the connection like so:
new PostgresDriver({
  database: "<db_name>",
  host: "localhost",
  user: "<db_user>",
  password: "<db_password>"
})
The logs from the Postgres database only show entries that look unrelated to my connection attempts:
2020-11-18 23:42:25 UTC::#:[6677]:LOG: checkpoint starting: time
2020-11-18 23:42:25 UTC::#:[6677]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.101 s, sync=0.001 s, total=0.115 s; sync files=1, longest=0.001 s, average=0.001 s; distance=65536 kB, estimate=65623 kB
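Since the database is reached through an SSH tunnel, the psql command above includes the forwarded local port (-p <db_port>), while the driver config does not. Purely as an illustration of mirroring those psql flags (the require path and the port value are assumptions on my part, not a confirmed fix), the connection attempt would look roughly like this:
const PostgresDriver = require("@cubejs-backend/postgres-driver"); // assumed driver package

new PostgresDriver({
  database: "<db_name>",
  host: "localhost",
  port: 5433, // replace with the <db_port> the SSH tunnel forwards, as passed to psql with -p
  user: "<db_user>",
  password: "<db_password>"
});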

Related

Http response at 400 or 500 level

I'm a novice in gRPC. My program is written with nuxtjs and is a simple login page that receives the username and password and sends them to the server using gRPC.
Everything is fine when I submit a request with BloomRPC. But when using the browser, the request is not sent to the server.
My auth class is as follows:
// auth.js
export default class {
  constructor(vars) {
    this.tokenKey = vars.tokenKey
    this.proto = vars.proto
    this.client = new vars.proto.AuthenticationClient('http://127.0.0.1:50051', null, null)
  }

  async loginRequest(user) {
    const request = new this.proto.LoginRequest()
    request.setUsername(user.username.trim().toLowerCase())
    request.setPassword(user.password.trim())
    return await this.client.login(request, {})
  }
}
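For completeness, the class above is used from the login page roughly like this (the import paths, token key, and credentials are placeholders, and protoModule stands for the grpc-web generated module that exports AuthenticationClient and LoginRequest):
// Sketch only – how the auth class above might be wired up; paths and values are placeholders.
import Auth from '~/plugins/auth'
import protoModule from '~/plugins/authentication_grpc_web_pb'

const auth = new Auth({ tokenKey: 'token', proto: protoModule })

async function submit() {
  // credentials here are placeholders typed into the login form
  const response = await auth.loginRequest({ username: 'alice', password: 'secret' })
  console.log(response)
}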
This error is shown when making the request from the browser, whether the server is up or not.
net ERROR_CONNECTION_REFUSED
message: 'Http response at 400 or 500 level'
...
Chrome Screenshot:
Do I have to do a specific configuration?
I just want a hint for configuring.
UPDATE:
This link says that you should use Envoy. But why do we need it? And how do I configure it?
BloomRPC screenshot:
As you can see on the right side of the image, the answer is returned correctly.
Where was the problem?
The problem is that requests do not reach the server. So it does not matter if the server is up or down.
> Short Answer:
I needed a proxy to relay requests to the gRPC server, so I used envoy proxy. nginx receives the request from the browser and sends it to a port (for example 5000); envoy listens on port 5000 and forwards the request to the gRPC server running on port 50051.
This is how I designed the path of a gRPC request.
> Long Answer:
1. Generate html/css/js files by building the nuxt project.
I put my website files in the root/site folder.
2. envoy config:
I configured the envoy.yaml file as follows, according to what the grpc-web documentation says.
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 5000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: sample_cluster
                  max_grpc_timeout: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: grpc-status,grpc-message
          http_filters:
          - name: envoy.filters.http.grpc_web
          - name: envoy.filters.http.cors
          - name: envoy.filters.http.router
  clusters:
  - name: sample_cluster
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: 0.0.0.0, port_value: 50051 }}]
This yaml file has two parts: listeners and clusters. In the first part (listeners), we specify which address and port to listen on (here, address 0.0.0.0 and port 5000), and in the second part (clusters), we tell it where to send the requests (here, address 0.0.0.0 and port 50051). To use this file, we give it to Docker, so I create a Dockerfile.
# Dockerfile
FROM envoyproxy/envoy:v1.14.3
COPY ./envoy.yaml /etc/envoy/envoy.yaml
EXPOSE 5000
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml
To build the image and run the container, I use the following commands inside the Dockerfile's folder:
docker build -t my-grpc-container:1.0.0 .
docker run -d --net=host my-grpc-container:1.0.0
By executing the above commands, envoy is up, listening on port 5000 and sending requests to port 50051.
For more information about Envoy, read this nice blog:
The role of Envoy:
Envoy translates the HTTP/1.1 calls produced by the client into HTTP/2 calls that can be handled by those services
(gRPC uses HTTP/2 for transport).
3. nginx config:
I went to the /etc/nginx/sites-enabled/ folder, opened the default file, and set the path of my website files (see section 1) as follows:
# server block
root /root/site;
Then I told nginx that if a request URL starts with /rpc/, it should be sent to port 5000 (where envoy is listening):
# server block
location /rpc/ {
    proxy_http_version 1.1;
    proxy_pass http://127.0.0.1:5000/;
}
Then I restarted nginx:
sudo systemctl restart nginx
By doing this part, nginx is up and sends requests that start with /rpc/ to port 5000, where envoy is ready to receive them.
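With this chain in place, the grpc-web client in auth.js should point at the public /rpc/ route instead of at port 50051 directly. A sketch of the changed constructor call (the domain below is a placeholder for the real site URL):
// auth.js (sketch) – go through nginx and envoy instead of hitting the gRPC port directly.
this.client = new vars.proto.AuthenticationClient('https://your-site.example/rpc', null, null)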
In summary:
The server is running on port 50051.
nginx sends requests to port 5000.
envoy, as an interface between the server and nginx, receives requests from port 5000 and sends them to port 50051.
Done
According to the Chrome screenshot you are trying to access port 5005 in JS, but according to the BloomRPC screenshot your service is listening on 50051.

mosquitto+mqtt.js got "Connection refused: Not authorized"

I set up mosquitto on CentOS 7 and a Node.js client based on mqtt.js, installing the broker with
yum install mosquitto mosquitto-clients
The local test
> mosquitto_sub -h localhost -t test
> mosquitto_pub -h localhost -t test -m "hello world"
works fine, but when I ran:
var mqtt = require('mqtt')
var client = mqtt.connect('mqtt://192.168.1.70')
client.on('connect', function () {
  client.subscribe('presence')
  client.publish('presence', 'Hello mqtt')
})
client.on('message', function (topic, message) {
  // message is Buffer
  console.log(message.toString())
  client.end()
})
I got Error: Connection refused: Not authorized
The mosquitto.conf looks like this:
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
allow_anonymous true
and I used systemctl restart mosquitto to restart it several times, which doesn't help. The firewall is down and the log file stays empty.
A screenshot on status:
Can anyone help please?
UPDATE:
It turns out that the mosquitto service is somehow broken, as the status shows Active: active (exited).
I used the mosquitto -p 1884 -v command to run another mosquitto process on port 1884, and it works fine. Then I tried to reload the conf using
> /etc/init.d/mosquitto reload. It gives me
Reloading mosquitto configuration (via systemctl): Job for mosquitto.service invalid.
[FAILED]
So there IS something wrong with the mosquitto service.
Not a final solution, but I managed to fix this with a remove-reboot-reinstall process; the status went green as follows:
SOLUTION
I managed to find out the reason it doesn't work: I had installed rabbitmq on my server, and its "rabbitmq_mqtt" plugin consumes port 1883. Reassigning a port solves the problem. The problem is simple, but yeah, the CLI should have given me more information.
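For illustration, if rabbitmq_mqtt keeps 1883 and mosquitto is moved to another port (like the 1884 test above), the client only needs that port in the URL; the address and port here are just examples:
// Sketch only – the same client as before, pointed at a non-default broker port.
var mqtt = require('mqtt')
var client = mqtt.connect('mqtt://192.168.1.70:1884')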
You need to add the authorization information to the mqtt connect method, just like this:
var client = mqtt.connect("ws://192.168.1.1", {
  username: "yourUsername",
  password: "yourPassword"
})
Add the Authorization details for the client to connect
var mqtt = require('mqtt')
var client = mqtt.connect('mqtt://192.168.1.70', {
  username: '<username>',
  password: '<password>'
});
client.on('connect', function () {
  client.subscribe('presence')
  client.publish('presence', 'Hello mqtt')
})
client.on('message', function (topic, message) {
  // message is Buffer
  console.log(message.toString())
  client.end()
})

Postgresql on OpenShift 2, using node.js

I have an app that uses Node.js and Postgresql on OpenShift. I can connect locally to the database and make queries, but I can't get it to work on the OpenShift server. When I push to the server, I get this error:
Waiting for application port (8080) become available ...
Application 'myapp' failed to start (port 8080 not available)
But I'm using port 8080...
My openshift ports are:
Service --- Local --------------- OpenShift
node ------ 127.0.0.1:8080 => 127.8.120.129:8080
postgresql 127.0.0.1:5432 => 127.8.120.130:5432
And here are the important lines of code.
First, the server.js:
...
var db = require('./postgresql/database.js');
db.sync();
...
var server_port = process.env.OPENSHIFT_NODEJS_PORT || 8080
var server_ip_address = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1'
server.listen(server_port, server_ip_address, function () {});
...
And database.js:
var Sequelize = require('sequelize');
var bd_url = process.env.OPENSHIFT_POSTGRESQL_DB_URL || 'postgres://<user>:<pass>@127.0.0.1:5432/sw';
var sequelize = new Sequelize(bd_url, {
  dialect: 'postgres',
  dialectOptions: {}
});
module.exports = sequelize;
Does anyone know what can fail?
Thanks!
OpenShift provides a default web server (written in Ruby) on almost every container/cartridge you create.
Every service is started using the "start" service hook, located at:
$OPENSHIFT_REPO_DIR/.openshift/action_hooks/start
You may find a line like this one:
[]\> nohup $OPENSHIFT_REPO_DIR/diy/testrubyserver.rb $OPENSHIFT_DIY_IP $OPENSHIFT_REPO_DIR/diy |& /usr/bin/logshifter -tag diy &
In order to verify which application is using port 8080, you can execute the "oo-lists-ports" command.
This command is just an alias for the "lsof" command.
Execute it without any arguments and you'll obtain the application that is locking your port 8080 (in my case):
[]\> oo-lists-ports
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 88451 1027 10u IPv4 392176 0t0 TCP 127.2.1.129:8080 (LISTEN)
[]\>
With the above information (PID), you just need to kill the related process:
(in my case)
[]\> ps -ef |grep 88451
1027 62829 61960 0 08:33 pts/0 00:00:00 grep 88451
1027 88451 1 0 Jun21 ? 00:00:16 node faceBot.js
[]\> kill -9 88451
After killing the process that is locking your port 8080 you will be able to run your Node JS stack on that port.
Regards

Can't access to socket.io.js on Raspberry Pi with Lighttpd [Node.JS & Socket.IO]

I'm completely new to Node.JS and Socket.IO; I only started yesterday.
I am trying to make Node.JS and Socket.IO work on my Raspberry Pi, but it doesn't seem to. I can't access <myip>:1337/socket.io/socket.io.js.
I have followed this tutorial, so my Lighttpd.conf file looks like this:
$HTTP["host"] == "<myURLtomywebsite>" {
proxy.server = (" " => ((
"host" => "<myIP>",
"port" => 1337)
)
)
My server.js looks like this:
var http = require('http');
httpServer = http.createServer(function(req, res) {
  res.end('Hello World!');
});
httpServer.listen(1337);
var io = require('socket.io').listen(httpServer);
var clients = 0;
io.sockets.on('connection', function(socket) {
  ++clients;
  socket.on('disconnect', function(data) {
    --clients;
    io.sockets.emit('disusr', clients);
  });
  io.sockets.emit('newusr', clients);
});
And I bind to the disusr and newusr events in my client.js to display the number of connected users in a div.
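For reference, the client.js binding is roughly this sketch (the div id is just an example):
// client.js (sketch) – display the counters emitted by server.js above.
var socket = io.connect();
socket.on('newusr', function (clients) {
  document.getElementById('clients').textContent = clients; // "clients" is an example div id
});
socket.on('disusr', function (clients) {
  document.getElementById('clients').textContent = clients;
});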
Everything looks fine on my localhost but, in the production environment, I cannot link to my socket.io.js file on port 1337. To be honest, I'm not even sure what address to use (the URL of my website with :1337 appended, localhost, or some other address I would have created?).
Any help would be much appreciated. Thanks!
I resolved my problem!
I linked socket.io.js like so: <script type="text/javascript" src="/socket.io/socket.io.js"></script>
I used HAProxy instead of Lighttpd mod_proxy as specified in this question
Here is my conf file (amend <...> per your configuration):
# this config needs haproxy-1.1.28 or haproxy-1.2.1
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    uid 99
    gid 99
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option http-use-proxy-header
    option redispatch
    option http-server-close
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend public
    bind *:80
    acl is_example hdr_end(host) -i <URL.toyourwebsite.com>
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket path_beg -i /websockets
    use_backend ws if is_websocket is_example
    default_backend www

backend ws
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000
    server apiserver localhost:<PORT> weight 1 maxconn 1024 check
And I made Lighttpd listen on port 8080 (otherwise HAProxy wouldn't start).
Remember, there is no need to use mod_proxy, as it is known to be incompatible with websockets. Use HAProxy instead.

Using MySQL on SilkJS Under OSX Lion

I'd like to use MySQL from SilkJS under OSX Lion, but can't find the exact steps to get it to work. I followed the general instructions found here:
Perform the SilkJS OSX Lion Quick Install
Install MySQL
Load MySQL from SilkJS REPL
Download/configure:
$ curl http://silkjs.org/install-osx.sh | sh
$ sudo port install mysql5-server +mysql5
$ sudo -u _mysql mysql_install_db5
$ sudo port load mysql5-server
Starting the SilkJS REPL, I ran:
$ silkjs
SilkJS> var MySQL = require('MySQL');
undefined
SilkJS> var SQL = new MySQL();
undefined
SilkJS> SQL.connect();
interpreter line 19 (main):
(object) :
SilkJS> SQL.startTransaction();
Caught Signal 11 for process: 77312
How can I run a simple MySQL query from SilkJS under Lion?
Fresh web install of silkjs on OSX Lion:
mschwartz@dionysus:~/src$ curl http://silkjs.org/install-osx.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1666 100 1666 0 0 3553 0 --:--:-- --:--:-- --:--:-- 79333
Installing SilkJS in /usr/local/silkjs
You may be asked to enter your password.
This is so the /usr/local directory can be made writable for the install.
Password:
Downloading SilkJS
######################################################################## 100.0%
Installation complete.
You need to add /usr/local/bin to your PATH environment variable.
This can be done by adding this line to your .bashrc file:
export PATH=/usr/local/bin:$PATH
You can run SilkJS with the following command:
$ silkjs
You can run the SilkJS HTTP Server with the following command:
$ httpd-silk.js yourapp.js
For instructions on setting up a WWW site for httpd-silk.js, see
http://silkjs.org/http-server/
Now to see that it works:
mschwartz@dionysus:~/src$ silkjs
SilkJS> console.dir(require('MySQL'));
undefined line 1 (eval):
function () {
this.queryCount = 0;
this.handle = null;
}
undefined
SilkJS>
Note that console.dir() shows a function. The rest of the methods are defined as properties of the function (constructor). So you must construct a MySQL() object:
SilkJS> var mysql = require('MySQL');
SilkJS> m = new mysql();
SilkJS> console.log(m.connect)
function (host, user, passwd, db) {
  if (!this.handle) {
    host = host || Config.mysql.host;
    user = user || Config.mysql.user;
    passwd = passwd !== undefined ? passwd : Config.mysql.passwd;
    db = db || Config.mysql.db;
    this.handle = mysql.connect(host, user, passwd, db);
  }
}
undefined
SilkJS>
Now, in order to connect, you have to pass in host, user, passwd, and db. Or if you're running in the httpd environment, you set Config.mysql, something like this:
Config.mysql = {
  host: 'localhost',    // or some other host running MySQL server
  user: 'someuser',     // username to authenticate in MySQL server
  passwd: 'whatever',   // password of username in MySQL server
  db: 'database_name'   // name of database in MySQL server to connect to
};
Then the HTTP server will create a global SQL object available from request to request. This is automatic; you can just use SQL.getDataRows(), SQL.getScalar(), and so on.
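Putting that together in the REPL, a minimal sketch of a simple query (the credentials and database name are placeholders, and I'm assuming getDataRows() takes a SQL string, as suggested above):
SilkJS> var MySQL = require('MySQL');
SilkJS> var SQL = new MySQL();
SilkJS> SQL.connect('localhost', 'someuser', 'whatever', 'database_name');
SilkJS> console.dir(SQL.getDataRows('SELECT 1 AS one'));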
