error: MongoError: Authentication failed. I am using docker and mongoose - javascript

I am trying to connect to MongoDB through docker-compose, but I keep getting the same error over and over, although I have tried all the solutions I could find online. Thank you.
I have my docker-compose.yml as
# Use root/example as user/password credentials
version: '3.1'

services:
  mongo:
    image: mongo
    restart: always
    container_name: mongo
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
      MONGO_INITDB_DATABASE: test
      MONGO_USERNAME: admin
      MONGO_PASSWORD: example
    volumes:
      - ./data:/data/db
      - ./mongo-entrypoint.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
    command: mongod

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
And I have the following shell script:
mongo -- "$MONGO_INITDB_DATABASE" <<EOF
db.createUser({
  user: "$MONGO_USERNAME",
  pwd: "$MONGO_PASSWORD",
  roles: [
    { role: 'readWrite', db: "$MONGO_INITDB_DATABASE" }
  ]
})
EOF
And I try to connect to the database with:
mongoose
  .connect("mongodb://admin:example@localhost:27017/test")
  .then(() => console.log("connected"))
  .catch((e) => console.log("error:", e));
On Linux my friend can connect with the same code, but I am getting this error:
running on 3000
error: MongoError: Authentication failed.
    at MessageStream.messageHandler (C:\Users\kamad\Desktop\3-2\cs308\proje\backend\node_modules\mongodb\lib\cmap\connection.js:268:20)
    at MessageStream.emit (events.js:315:20)
    at processIncomingData (C:\Users\kamad\Desktop\3-2\cs308\proje\backend\node_modules\mongodb\lib\cmap\message_stream.js:144:12)
    at MessageStream._write (C:\Users\kamad\Desktop\3-2\cs308\proje\backend\node_modules\mongodb\lib\cmap\message_stream.js:42:5)
    at writeOrBuffer (internal/streams/writable.js:358:12)
    at MessageStream.Writable.write (internal/streams/writable.js:303:10)
    at Socket.ondata (internal/streams/readable.js:719:22)
    at Socket.emit (events.js:315:20)
    at addChunk (internal/streams/readable.js:309:12)
    at readableAddChunk (internal/streams/readable.js:284:9)
    at Socket.Readable.push (internal/streams/readable.js:223:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:188:23) {
  ok: 0,
  code: 18,
  codeName: 'AuthenticationFailed'
}

I solved this just by changing the host port from 27017 to 27018. It turned out another app of mine was already using that port.
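In compose terms, that means changing only the host side of the port mapping (a sketch of the relevant excerpt; 27018 is just a port that happened to be free on my machine):

  services:
    mongo:
      image: mongo
      ports:
        - 27018:27017   # hostPort:containerPort — the container still listens on 27017

and then connecting from the host with mongodb://admin:example@localhost:27018/test.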

Here is my working setup:
docker-compose -f stack.yml up --build --force-recreate --renew-anon-volumes -d
stack.yml:
version: "3.5"

services:
  mongo:
    image: mongo:latest
    container_name: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    ports:
      - "0.0.0.0:27017:27017"
    networks:
      - MONGO
    volumes:
      - MONGO_DATA:/Users/lirui/docker/data/mongo/db
      - MONGO_CONFIG:/Users/lirui/docker/data/mongo/configdb

  mongo-express:
    image: mongo-express:latest
    container_name: mongo-express
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: admin
      ME_CONFIG_MONGODB_ADMINPASSWORD: admin
      ME_CONFIG_MONGODB_SERVER: mongo
      ME_CONFIG_MONGODB_PORT: "27017"
    ports:
      - "0.0.0.0:8088:8081"
    networks:
      - MONGO
    depends_on:
      - mongo

networks:
  MONGO:
    name: MONGO

volumes:
  MONGO_DATA:
    name: MONGO_DATA
  MONGO_CONFIG:
    name: MONGO_CONFIG
Replace the paths with absolute paths for your machine.
And DO NOT use 'root' as the user name for authentication.
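With that stack, a matching mongoose connection would look roughly like this (a sketch; authSource=admin is needed because MONGO_INITDB_ROOT_USERNAME creates the user in the admin database):

  // Sketch: connect as the root user defined in stack.yml above.
  mongoose
    .connect("mongodb://admin:admin@localhost:27017/test?authSource=admin")
    .then(() => console.log("connected"))
    .catch((e) => console.log("error:", e));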

Use mongosh.
The shell script can use:
mongosh --port 27017 --authenticationDatabase admin -u "$MONGO_INITDB_ROOT_USERNAME" -p "$MONGO_INITDB_ROOT_PASSWORD" <<EOF
use admin
db.createUser({
  user: process.env.MONGO_USERNAME,
  pwd: process.env.MONGO_PASSWORD,
  roles: [{
    role: 'readWrite',
    db: process.env.MONGO_INITDB_DATABASE
  }]
})
EOF
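To have that script run automatically on first start, it can be mounted into the image's init directory, following the same pattern as the compose files above (a sketch of the relevant volume entry; note the official image only runs init scripts when the data directory is empty, which is what flags like --renew-anon-volumes above are for):

  volumes:
    - ./mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro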

Related

Vuestorefront can't communicate with Prestashop in DockerFile

I have a problem with the URL middleware and the PrestaShop integration. Everything works fine locally, but after moving the frontend (vsf/nuxt) and backend (PrestaShop) into Docker containers, the frontend cannot communicate with the PrestaShop API.
middleware.config.js
module.exports = {
  integrations: {
    prestashop: {
      location: '@vue-storefront/prestashop-api/server',
      configuration: {
        api: {
          url: 'http://lgm-prestashop'
        }
      }
    }
  }
};
docker-compose file:
version: '3.9'

services:
  lgm-front:
    build: .
    container_name: lgm-front
    ports:
      - 3010:3000
    networks:
      - lgm-prestashop
    depends_on:
      - lgm-prestashop
    volumes:
      - ./packages:/var/www/prestashop/packages
    command: yarn dev

  lgm-prestashop:
    image: prestashop/prestashop:1.7
    container_name: lgm-prestashop
    environment:
      - PS_HANDLE_DYNAMIC_DOMAIN=1
      - DB_SERVER=lgm-mysql
      - PS_FOLDER_ADMIN=lagom-admin
      - PS_FOLDER_INSTALL=lagom-install
    ports:
      - 8081:80
    networks:
      - lgm-prestashop

  lgm-mysql:
    image: mysql:8
    container_name: lgm-prestashop-db
    command: --default-authentication-plugin=mysql_native_password
    environment:
      - MYSQL_DATABASE=prestashop
      - MYSQL_ROOT_PASSWORD=prestashop
    # ports:
    #   - 3316:3306
    networks:
      - lgm-prestashop

  lgm-phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - 3011:3000
    environment:
      - PMA_HOST=lgm-mysql
      - VIRTUAL_HOST=phpmyadmin.presta.local
    container_name: lgm-presta_phpmyadmin

networks:
  lgm-prestashop:
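(A note on addressing, as an aside rather than from the original thread: on the shared lgm-prestashop network, containers reach each other by service name and container port, while the host uses the published port. As a sketch:)

  // middleware.config.js — used from inside the lgm-front container:
  url: 'http://lgm-prestashop'      // container port 80 implied
  // from the host or a browser it would instead be:
  // url: 'http://localhost:8081'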

Nodejs sequelize RangeError using mysql

I'm using Sequelize in Node.js along with MySQL. I'm trying to run the command npx sequelize db:create but it is giving me this error. Not sure what to do here. Any help is appreciated. Thank you.
Sequelize CLI [Node: 14.17.4, CLI: 6.2.0, ORM: 6.6.2]
Loaded configuration file "src/config.js".
Using environment "development".
internal/buffer.js:83
    throw new ERR_OUT_OF_RANGE(type || 'offset',
    ^

RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range. It must be >= 0 and <= 5. Received 9
    at boundsError (internal/buffer.js:83:9)
    at Buffer.readUInt32LE (internal/buffer.js:217:5)
    at Packet.readInt32 (/root/sequelize-test/node_modules/mysql2/lib/packets/packet.js:103:24)
    at Function.fromPacket (/root/sequelize-test/node_modules/mysql2/lib/packets/handshake.js:63:32)
    at ClientHandshake.handshakeInit (/root/sequelize-test/node_modules/mysql2/lib/commands/client_handshake.js:93:40)
    at ClientHandshake.execute (/root/sequelize-test/node_modules/mysql2/lib/commands/command.js:39:22)
    at Connection.handlePacket (/root/sequelize-test/node_modules/mysql2/lib/connection.js:425:32)
    at PacketParser.onPacket (/root/sequelize-test/node_modules/mysql2/lib/connection.js:75:12)
    at PacketParser.executeStart (/root/sequelize-test/node_modules/mysql2/lib/packet_parser.js:75:16)
    at Socket.<anonymous> (/root/sequelize-test/node_modules/mysql2/lib/connection.js:82:25)
    at Socket.emit (events.js:400:28)
    at addChunk (internal/streams/readable.js:290:12)
    at readableAddChunk (internal/streams/readable.js:265:9)
    at Socket.Readable.push (internal/streams/readable.js:204:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:188:23) {
  code: 'ERR_OUT_OF_RANGE'
}
My database config:
module.exports = {
  development: {
    username: 'root',
    password: 'root123456',
    database: 'test_run',
    host: 'localhost',
    port: 33060,
    dialect: 'mysql',
    timezone: '+06:00',
    // dialectOptions: {
    //   bigNumberStrings: true,
    // },
  },
};
Make sure that you have installed mysql2 globally using this command:
npm i -g mysql2
If the problem persists, try downgrading your Node version to 10.
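If it still fails after that, it can help to take Sequelize out of the picture and test the driver directly; a minimal sketch that connects with mysql2 using the same settings as the config above:

  // Sketch: quick connectivity check with mysql2 directly,
  // reusing the host/port/credentials from the development config.
  const mysql = require('mysql2');
  const conn = mysql.createConnection({
    host: 'localhost',
    port: 33060,
    user: 'root',
    password: 'root123456',
  });
  conn.connect((err) => {
    console.log(err ? err : 'connected');
    conn.end();
  });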
Regards

Smart contracts won't deploy to a goQuorum blockchain

I installed GoQuorum with quorum-wizard, with 4 Quorum nodes and 4 Tessera nodes on IBFT. When I start the network with ./start.sh I get:
Starting Quorum network...
Waiting until all Tessera nodes are running...
Waiting until all Tessera nodes are running...
All Tessera nodes started
Starting Quorum nodes
Successfully started Quorum network.
--------------------------------------------------------------------------------
Tessera Node 1 public key:
BULeR8JyUWhiuuCMU/HLA0Q5pzkYT+cHII3ZKBey3Bo=
Tessera Node 2 public key:
QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc=
Tessera Node 3 public key:
1iTZde/ndBHvzhcl7V68x44Vx7pl8nwx9LqnM/AfJUg=
Tessera Node 4 public key:
oNspPPgszVUFw0qmGFfWwh1uxVUXgvBxleXORHj07g8=
--------------------------------------------------------------------------------
When I run nmap -p 21000-21100 I get:
21000/tcp open irtrans
21001/tcp open unknown
21002/tcp open unknown
21003/tcp open unknown
This is my truffle-config.js
const HDWalletProvider = require("@truffle/hdwallet-provider");
const mnemonic = "not my real mn mon ic num phrase dum get l gg";

module.exports = {
  networks: {
    rinkeby: {
      provider: function() {
        return new HDWalletProvider(mnemonic, "https://rinkeby.infura.io/v3/" + mnemonic);
      },
      network_id: '*',
      timeoutBlocks: 100000,
      networkCheckTimeout: 2000000
    },
    quorum: {
      provider: function() {
        return new HDWalletProvider(mnemonic, "http://127.0.0.1:21000/", chainId = 10);
      },
      network_id: '*',
      type: 'quorum',
      timeoutBlocks: 100000,
      networkCheckTimeout: 2000000
    }
  },
  compilers: {
    solc: {
      version: "0.5.0",
      settings: {
        optimizer: {
          enabled: true, // Default: false
          runs: 1000     // Default: 200
        },
        evmVersion: "homestead" // Default: "byzantium"
      }
    }
  }
};
When I cd into the directory with truffle-config.js and run truffle deploy --reset --network quorum to compile and deploy my smart contracts, I get the following error.
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
/root/dapp1/Dapp/node_modules/web3-core-helpers/src/errors.js:42
return new Error(message);
^
Error: PollingBlockTracker - encountered an error while attempting to update latest block:
Error: Invalid JSON RPC response: ""
at Object.InvalidResponse (/root/dapp1/Dapp/node_modules/web3-core-helpers/src/errors.js:42:16)
at XMLHttpRequest.request.onreadystatechange (/root/dapp1/Dapp/node_modules/web3-providers-http/src/index.js:92:32)
at XMLHttpRequestEventTarget.dispatchEvent (/root/dapp1/Dapp/node_modules/xhr2-cookies/xml-http-request-event-target.ts:44:13)
at XMLHttpRequest._setReadyState (/root/dapp1/Dapp/node_modules/xhr2-cookies/xml-http-request.ts:219:8)
at XMLHttpRequest._onHttpRequestError (/root/dapp1/Dapp/node_modules/xhr2-cookies/xml-http-request.ts:379:8)
at ClientRequest.<anonymous> (/root/dapp1/Dapp/node_modules/xhr2-cookies/xml-http-request.ts:266:37)
at ClientRequest.emit (events.js:315:20)
at Socket.socketOnEnd (_http_client.js:493:9)
at Socket.emit (events.js:327:22)
at endReadableNT (internal/streams/readable.js:1327:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)
at PollingBlockTracker._performSync (/root/dapp1/Dapp/node_modules/eth-block-tracker/src/polling.js:51:24)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
I can deploy the smart contracts to the Rinkeby testnet from this environment, but I can't get them to deploy to Quorum.
I am using:
Node v14.16.1
Ubuntu 18.04 (64 Bit)
Quorum 21.1.0
Web3.js v1.3.5
Truffle v5.3.4 (core: 5.3.4)
solidity v0.5.0
I've not used Truffle with HDWalletProvider, which I believe is from an older version of Truffle. I'm not certain it will set up the node connection.
We would normally set up the configuration like the following in the quorum: {} section:
  host: "localhost",
  port: 22001,
  type: "quorum"
I would suggest trying that and seeing if it works.
Take a look at the truffle docs for working with quorum: https://www.trufflesuite.com/docs/truffle/distributed-ledger-support/working-with-quorum
Did you change the default ports in quorum-wizard?
The default for the node 1 RPC is 22000; the 21000 range is for p2p networking.
I would set the host and port per Satpal's answer above, as in the hdwallet documentation.
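Putting the two answers together, the quorum network entry would look roughly like this (a sketch; the port is an assumption based on the wizard defaults mentioned above — node 1 RPC on 22000, node 2 on 22001, and so on):

  quorum: {
    host: "localhost",
    port: 22000,      // node 1 RPC by default; adjust if the wizard ports were changed
    network_id: "*",
    type: "quorum",
    timeoutBlocks: 100000,
    networkCheckTimeout: 2000000
  }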

How to correctly use OpenTelemetry exporter with OpenTelemetry collector in client and server?

I am trying to make the OpenTelemetry exporter work with the OpenTelemetry collector.
I found this OpenTelemetry collector demo.
So I copied these four config files to my app:
docker-compose.yml (in my app, I removed the generators part and Prometheus, which I am currently having issues running)
otel-agent-config.yaml
otel-collector-config.yaml
.env
Also, based on these two demos in the open-telemetry/opentelemetry-js repo:
Traces in Web demo
Traces in Node - GRPC demo
I came up with my version (sorry it's a bit long; it was really hard to set up a minimal working version due to the lack of docs):
.env
OTELCOL_IMG=otel/opentelemetry-collector-dev:latest
OTELCOL_ARGS=
docker-compose.yml
version: '3.7'

services:
  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"

  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"

  # Collector
  otel-collector:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "55678"       # OpenCensus receiver
      - "55680:55679" # zpages extension
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one

  # Agent
  otel-agent:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-agent-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-agent-config.yaml:/etc/otel-agent-config.yaml
    ports:
      - "1777:1777"   # pprof extension
      - "8887:8888"   # Prometheus metrics exposed by the agent
      - "14268"       # Jaeger receiver
      - "55678"       # OpenCensus receiver
      - "55679:55679" # zpages extension
      - "13133"       # health_check
    depends_on:
      - otel-collector
otel-agent-config.yaml
receivers:
  opencensus:
  zipkin:
    endpoint: :9411
  jaeger:
    protocols:
      thrift_http:

exporters:
  opencensus:
    endpoint: "otel-collector:55678"
    insecure: true
  logging:
    loglevel: debug

processors:
  batch:
  queued_retry:

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [opencensus, jaeger, zipkin]
      processors: [batch, queued_retry]
      exporters: [opencensus, logging]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging, opencensus]
otel-collector-config.yaml
receivers:
  opencensus:

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1
  logging:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true

processors:
  batch:
  queued_retry:

extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [opencensus]
      processors: [batch, queued_retry]
      exporters: [logging, zipkin, jaeger]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging]
After running docker-compose up -d, I can open the Jaeger UI (http://localhost:16686) and the Zipkin UI (http://localhost:9411), and my ConsoleSpanExporter works in both the web client and the Express.js server.
However, when I try this OpenTelemetry exporter code in both the client and the server, I still have issues connecting to the OpenTelemetry collector.
Please see my comments about the URL inside the code:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
// ...
tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service',
      // url: 'http://localhost:55680/v1/trace', // Returns error 404.
      // url: 'http://localhost:55681/v1/trace', // No response, does not exist.
      // url: 'http://localhost:14268/v1/trace', // No response, does not exist.
    })
  )
);
Any idea? Thanks
The demo you tried is using an older configuration and OpenCensus, which should be replaced with the otlp receiver. That said, here is a working example:
https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node/docker
So I'm copying the files from there:
docker-compose.yaml
version: "3"
services:
# Collector
collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
volumes:
- ./collector-config.yaml:/conf/collector-config.yaml
ports:
- "9464:9464"
- "55680:55680"
- "55681:55681"
depends_on:
- zipkin-all-in-one
# Zipkin
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
# Prometheus
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"

processors:
  batch:
  queued_retry:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.

scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
This should work fine with opentelemetry-js ver. 0.10.2.
The default port for traces is 55680 and for metrics 55681.
At the link I posted previously you will always find the latest up-to-date working example:
https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node
And for a web example you can use the same docker setup and see all the working examples here:
https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/tracer-web/
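For completeness, wiring the exporter into a web tracer provider looks roughly like this (a sketch against the 0.10.x API; the package names and the register() call are assumptions based on the examples linked above):

  import { WebTracerProvider } from '@opentelemetry/web';
  import { SimpleSpanProcessor } from '@opentelemetry/tracing';
  import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

  const provider = new WebTracerProvider();
  // Export each span as it ends to the collector started by the compose file above.
  provider.addSpanProcessor(
    new SimpleSpanProcessor(new CollectorTraceExporter({ serviceName: 'my-service' }))
  );
  provider.register(); // set this provider as the global tracer provider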
Thank you so much for @BObecny's help! This is a complement to @BObecny's answer.
Since I am more interested in integrating with Jaeger, here is the config to set it up with all of Jaeger, Zipkin, and Prometheus. It now works on both the front end and the back end.
First, both the front end and the back end use the same exporter code:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';

new SimpleSpanProcessor(
  new CollectorTraceExporter({
    serviceName: 'my-service',
  })
)
docker-compose.yaml
version: "3"
services:
# Collector
collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
volumes:
- ./collector-config.yaml:/conf/collector-config.yaml
ports:
- "9464:9464"
- "55680:55680"
- "55681:55681"
depends_on:
- jaeger-all-in-one
- zipkin-all-in-one
# Jaeger
jaeger-all-in-one:
image: jaegertracing/all-in-one:latest
ports:
- "16686:16686"
- "14268"
- "14250"
# Zipkin
zipkin-all-in-one:
image: openzipkin/zipkin:latest
ports:
- "9411:9411"
# Prometheus
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"

processors:
  batch:
  queued_retry:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin, jaeger]   # include the jaeger exporter defined above
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.

scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
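With these three files in one directory, the stack comes up the same way:

  docker-compose up -d

Spans should then show up in the Jaeger UI (http://localhost:16686) and the Zipkin UI (http://localhost:9411), and the collector's metrics are scraped by Prometheus (http://localhost:9090).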

Serve asset files in nginx using Kubernetes

I'm trying to deploy a pod to Kubernetes running my node app and an nginx proxy server, which should also serve my asset files.
I'm using two containers inside one pod for that. The code below runs the application correctly, but the asset files are not served by nginx.
Below is my front-end-deployment.yaml file, which takes care of creating the deployment. I'm wondering why nginx with this configuration does not serve the static files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 3;
    error_log /var/log/nginx/error.log;

    events {
      worker_connections 10240;
    }

    http {
      log_format main
        'remote_addr:$remote_addr\t'
        'time_local:$time_local\t'
        'method:$request_method\t'
        'uri:$request_uri\t'
        'host:$host\t'
        'status:$status\t'
        'bytes_sent:$body_bytes_sent\t'
        'referer:$http_referer\t'
        'useragent:$http_user_agent\t'
        'forwardedfor:$http_x_forwarded_for\t'
        'request_time:$request_time';
      access_log /var/log/nginx/access.log main;

      upstream webapp {
        server 127.0.0.1:3000;
      }

      server {
        listen 80;
        root /var/www/html;

        location / {
          proxy_pass http://webapp;
          proxy_redirect off;
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      volumes:
        - name: nginx-proxy-config
          configMap:
            name: mc3-nginx-conf
        - name: shared-data
          emptyDir: {}
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-proxy-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: shared-data
              mountPath: /var/www/html
        - name: frontend
          image: sepehraliakbari/rtlnl-frontend:latest
          volumeMounts:
            - name: shared-data
              mountPath: /var/www/html
          lifecycle:
            postStart:
              exec:
                command: ['/bin/sh', '-c', 'cp -r /app/build/client/. /var/www/html']
          ports:
            - containerPort: 3000
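(An observation, not from the original thread: with location / { proxy_pass ... }, nginx forwards every request to the node app and never consults root /var/www/html. A sketch of a server block that tries static files first and falls back to the app:)

  server {
    listen 80;
    root /var/www/html;

    location / {
      # Serve a matching file from /var/www/html if it exists,
      # otherwise hand the request to the node app.
      try_files $uri @app;
    }

    location @app {
      proxy_pass http://webapp;
      proxy_redirect off;
    }
  }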
