Data traffic is a major cost driver in a MongoDB Atlas cluster. Being able to break down traffic by collection name, or even by query type, could help identify the inefficient queries and operations that cause high data traffic, as a first step toward optimizing them.
According to MongoDB Atlas support (as of March 2022), there is currently no native way to break down data traffic by anything other than cluster node. So the solution can only live on the client side; in my specific case, that is the server that uses the MongoDB Node.js driver to make requests to the MongoDB Atlas cluster.
The obvious approach would be to build a layer on top of the MongoDB Node.js driver to log the data sizes for all sent and received request data and calculate the metrics from that.
However, the server making the requests runs in a Node.js environment on an EC2 instance in AWS, so I wonder whether there is a more elegant approach using AWS CloudWatch. The challenge is that the requests need to be broken down per MongoDB collection and per query type, and that information is encoded in the request body, not the URL.
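For the driver-layer approach, the Node.js driver's command monitoring events (enabled with the real driver option monitorCommands: true) already expose the command name and the target collection, so the logging layer can be a thin classifier. A sketch; the byte count here approximates BSON size via JSON length (calculateObjectSize from the bson package would give exact figures):

```javascript
// Classify a driver command by collection and operation type, and
// approximate its payload size. Command shapes match what the driver's
// command-monitoring events surface.
function classifyCommand(commandName, command) {
  // For CRUD commands the collection is the value under the command name,
  // e.g. { find: "users", filter: {...} } or { insert: "orders", documents: [...] }.
  const target = command[commandName];
  return {
    collection: typeof target === "string" ? target : "(non-collection)",
    queryType: commandName, // find, insert, update, delete, aggregate, ...
    approxBytes: Buffer.byteLength(JSON.stringify(command)),
  };
}
```

Wiring sketch (untested; metrics.record is a hypothetical sink): create the client with new MongoClient(uri, { monitorCommands: true }) and attach client.on("commandStarted", (ev) => metrics.record(classifyCommand(ev.commandName, ev.command))).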
The Azure docs are a little confusing. They say:
Do I need to make any changes to my client application to use clustering?
When clustering is enabled, only database 0 is available. If your client application uses multiple databases and it tries to read or write to a database other than 0, the following exception is thrown. Unhandled Exception: StackExchange.Redis.RedisConnectionException: ProtocolFailure on GET ---> StackExchange.Redis.RedisCommandException: Multiple databases are not supported on this server; cannot switch to database: 6
If you are using StackExchange.Redis, you must use 1.0.481 or later. You connect to the cache using the same endpoints, ports, and keys that you use when connecting to a cache that does not have clustering enabled. The only difference is that all reads and writes must be done to database 0.
And says:
Do all Redis clients support clustering?
Not all clients support Redis clustering! Please check the documentation for the library you are using, to verify you are using a library and version which support clustering. StackExchange.Redis is one library that does support clustering, in its newer versions. For more information on other clients, see the Playing with the cluster section of the Redis cluster tutorial.
The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and also defines new error responses such as 'MOVED' and 'CROSSSLOT'. Attempting to use a client that doesn't support clustering with a cluster-mode cache can result in a lot of MOVED redirection exceptions, or just break your application if you are doing cross-slot multi-key requests.
I'm using ioredis in Node.js, but I don't know whether I have to use cluster mode or not.
According to the documentation you referenced, you need a client that understands the cluster's error responses, for example on multi-key requests like MGET. These errors happen when keys stored in different slots are accessed in a single multi-key request.
A Redis client that supports clustering will also be able to follow redirects to the right node when accessing keys.
https://redis.io/topics/cluster-tutorial#playing-with-the-cluster
The redis-cli cluster support is very basic so it always uses the fact that Redis Cluster nodes are able to redirect a client to the right node. A serious client is able to do better than that, and cache the map between hash slots and nodes addresses, to directly use the right connection to the right node. The map is refreshed only when something changed in the cluster configuration, for example after a failover or after the system administrator changed the cluster layout by adding or removing nodes.
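As for ioredis: it does support cluster mode, via its Redis.Cluster constructor. CROSSSLOT errors on multi-key commands can be avoided by giving related keys the same hash tag, because Redis hashes only the part between the first { and } when one is present. A minimal sketch of the slot calculation Redis uses (CRC16/XMODEM mod 16384):

```javascript
// CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses
// for key -> hash slot mapping.
function crc16(buf) {
  let crc = 0;
  for (const byte of buf) {
    crc ^= byte << 8;
    for (let i = 0; i < 8; i++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// Map a key to one of the 16384 cluster slots. If the key contains a
// non-empty {...} hash tag, only the tag is hashed, so keys sharing a tag
// land in the same slot and stay safe for multi-key commands.
function keySlot(key) {
  const s = key.indexOf("{");
  if (s !== -1) {
    const e = key.indexOf("}", s + 1);
    if (e !== -1 && e !== s + 1) key = key.slice(s + 1, e);
  }
  return crc16(Buffer.from(key)) % 16384;
}
```

For example, "{user:1}:name" and "{user:1}:email" map to the same slot, so MGET on both never raises CROSSSLOT.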
I have a Node.JS backend running on Heroku which pulls data from a Google sheet. This app will run once a day to pull updated data from the Google Sheet.
I also have a client written in HTML, CSS & JS which will need to draw that data from the backend.
The problem is that the client runs on a different server than the Node.js backend. This means I have to have the Node.js backend write to some form of database, and then have the client download the data from there.
Some important information:
I don't have access to the client server, but of course I can access the backend.
I only need to transfer a very small amount of data (only 4 pieces of data).
I am doing this as a volunteer project, and therefore anything suggested needs to be free.
These are the options I have considered:
Option 1: Use Heroku Postgres
This is the option I initially wanted to use. However, I learnt that the credentials for accessing the database change every so often, which means the final product may not be completely hands-free.
Option 2: Find an external SQL database host
This is the more likely option of the two. However, I've found that many free database hosts are insecure and difficult to use. I had a look at 000webhost, but I quickly learnt that the database was only reachable from localhost, which meant it couldn't be accessed by my client.
Which of these options, if any, are the best? What other methods can I use to accomplish this? Could someone please give me some recommendations on services which I could use?
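One note on Option 1: Heroku rotates the Postgres credentials from time to time, but it keeps the DATABASE_URL config var up to date, so an app that reads the URL from the environment at startup (instead of hard-coding credentials) stays hands-free. A small parsing sketch using Node's built-in URL class:

```javascript
// Parse a Heroku-style connection string such as
// postgres://user:password@host:5432/dbname into connection options.
function parseDatabaseUrl(url) {
  const u = new URL(url);
  return {
    user: decodeURIComponent(u.username),
    password: decodeURIComponent(u.password),
    host: u.hostname,
    port: Number(u.port || 5432), // default Postgres port
    database: u.pathname.slice(1), // strip the leading "/"
  };
}

// Usage: read the current value at startup, never hard-code it.
// const cfg = parseDatabaseUrl(process.env.DATABASE_URL);
```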
Background:
I am building a ReactJS application using AWS Cognito, DynamoDB and S3. The application is based in the recruitment sector, where employers and employees can post and view jobs respectively. When an employee applies for a job, the employer can view the employee's profile and decide whether or not to message them. Employees and employers converse via an on-site messaging service.
The Question:
What is the best method to facilitate user chat?
i.e. what is a nice and efficient way to store messages and notify users when they have a new message.
Our current approach is to poll with setTimeout() on the site and check for new messages, but this will be very inefficient, so I'm looking for some guidance.
I would like to stay inside the amazon infrastructure as much as possible but I am open to all suggestions.
I'm currently building something similar for a startup I'm working at. Our React app is served by a Node.js server, while the API backend is provided by a Django API with DRF. As in your user-chat case, we need to handle some real-time data arriving in the frontend.
Our approach
The solution can be split into inter-server and server-to-browser real-time communication:
We use Redis (AWS ElastiCache, to be exact) as a publish/subscribe message queue to push incoming data from the API backend to the Node.js server. Specifically, whenever an instance of the model in question is created by an HTTP POST call (i.e. in your case a message, which is sent to the server), we publish JSON-serialized information on a channel specific to the actors of concern.
On the Node.js servers, we subscribe to the channels of interest and receive information from the backend in real time. We then use socket.io to provide a WebSocket connection to the frontend, which can be easily integrated with React.
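A sketch of this Redis-to-socket.io bridge. The "chat:&lt;conversationId&gt;" channel naming is an assumption on my part, and routeMessage is kept pure so the forwarding logic can be tested without Redis running:

```javascript
// Decide where a pub/sub message should go in socket.io terms.
// Returns null for channels this bridge doesn't care about.
function routeMessage(channel, rawJson) {
  const [prefix, conversationId] = channel.split(":");
  if (prefix !== "chat" || !conversationId) return null;
  return {
    room: `conversation-${conversationId}`, // socket.io room per conversation
    payload: JSON.parse(rawJson),
  };
}

// Wiring sketch (untested; requires ioredis and socket.io):
// const sub = new Redis();
// sub.psubscribe("chat:*");
// sub.on("pmessage", (_pattern, channel, message) => {
//   const r = routeMessage(channel, message);
//   if (r) io.to(r.room).emit("new-message", r.payload);
// });
```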
Limitations of this approach
You cannot simply serve your React app as a static website from S3; you have to rely on a Node x React approach. react-boilerplate (by Max Stoiber, I think) is a great way to start.
What's more, you can also use WebSockets end to end. We use this approach because our data source isn't a browser but a constrained device.
Hope that helps!
I was wondering how I would communicate with another Node.js instance. For instance, if one Node.js instance hosts a chat room, how would I get the chat and the list of people connected to it from another Node.js instance?
Secondly, I was wondering whether it is possible to manage Minecraft servers using Node.js: for example, create a directory, copy all the necessary files, then start the server with X amount of RAM, and be able to receive the server's output and send console commands.
You have many options: Socket.io, REST, SOAP, TCP/IP, or even lower-level protocols. It really depends on what is supported by the chat node. If you own both nodes, I would suggest using Socket.io; it is more real-time and supports push-based communication. Otherwise you will have to periodically poll the REST or SOAP API of the remote node.
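If you end up on the plain TCP/IP route, Node's built-in net module plus a trivial newline-delimited JSON framing is enough. A sketch; encode/makeDecoder are pure so the framing can be tested without opening a socket, and connectedUsers in the wiring comment is a hypothetical variable held by the chat node:

```javascript
// Frame a message as one JSON document per line.
function encode(msg) {
  return JSON.stringify(msg) + "\n";
}

// Build a stream decoder: TCP chunks may split or merge messages
// arbitrarily, so buffer until each newline and parse line by line.
function makeDecoder(onMessage) {
  let buffered = "";
  return (chunk) => {
    buffered += chunk;
    let nl;
    while ((nl = buffered.indexOf("\n")) !== -1) {
      onMessage(JSON.parse(buffered.slice(0, nl)));
      buffered = buffered.slice(nl + 1);
    }
  };
}

// Wiring sketch (untested):
// const net = require("net");
// net.createServer((sock) => {
//   sock.on("data", makeDecoder((msg) => {
//     if (msg.type === "who") sock.write(encode({ users: connectedUsers }));
//   }));
// }).listen(4000);
```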
As far as Node.js is concerned, it can execute shell commands. From there it is up to Minecraft what it offers and adheres to.
Wanted to get some feedback on this implementation.
I'm developing an application on the PC to send and receive data to the serial port.
Some of the data received by the application will be solicited, while other data will be unsolicited.
Controlling the serial port and processing messages would be handled by a Python application sitting between the serial port and the MySQL database. This would be a threaded application, with one thread handling sending/receiving via the Queue library and other threads handling logic and database chores.
The MySQL database would contain tables for storing data received from the serial port, as well as tables of outgoing commands that need to be sent to the serial port. A command sent out may or may not be received, so some means of handling retries would be required.
The web app, using HTML, PHP, and JavaScript, would provide the UI. Users can query data and send commands to change parameters, etc. All commands sent out would be written into an outgoing table in the database and picked up by the Python app.
My question: Is this a reasonable implementation? Any ideas or thoughts would be appreciated. Thanks.
It seems there's a lot of places for things to go wrong.
Why not just cut out PHP all together and use python?
e.g. use a Python web framework and let your JavaScript communicate with that, while it also reads the serial port and logs to MySQL.
That's just me though. I'd try and cut out as many points where it could fail as possible and keep it super simple.
You might also want to check out pySerial (http://pyserial.sourceforge.net/). You might also want to think about your sampling rates, i.e. how much data you are going to be generating and at what frequency; in other words, how much data you are planning to store. That will give you some idea of system sizing.