We have a server running on Express.js with a MongoDB database accessed through Mongoose. The application is in production and we are facing an issue where APIs take too long to respond; sometimes they even time out. It is completely random and happens for just 10-15 requests a day. We have an AWS load balancer, the server is hosted on 2 instances, and MongoDB is hosted on a separate instance. We have put logs in the required places in the server. At first I thought it was a TCP/IP connection issue, so I configured keepAliveTimeout and headersTimeout in the application (Reference), but that didn't help either.
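For context, the configuration I tried looks roughly like this (the exact timeout values here are placeholders; the idea is that the server's keep-alive timeout should be longer than the load balancer's idle timeout, which defaults to 60 seconds on an AWS load balancer):
const express = require('express');
const app = express();
// ... routes ...
const server = app.listen(3000);
// Keep-alive settings on the underlying http.Server (values are placeholders):
server.keepAliveTimeout = 65000; // longer than the ALB's 60s idle timeout
server.headersTimeout = 66000;   // must be longer than keepAliveTimeout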
I tried to analyze the logs and found something: the request reaches the server but doesn't start processing for 5-10 seconds, and sometimes it processes everything within a second but never sends the response back. I even tried to replicate the issue on DEV with Apache JMeter, but the issue doesn't reproduce there. I am thinking this might be due to Node.js being single-threaded, but there are so many applications running on Node.js; how are they managing such scenarios? Also, we have millions of records in the database, so could they be causing an issue? By the way, we have properly indexed the fields. Please help me out with this issue.
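To check whether the event loop itself is being blocked while those requests sit waiting, one diagnostic I can add is a sketch like the following, using Node's built-in perf_hooks (the threshold and logging interval are arbitrary):
const { monitorEventLoopDelay } = require('perf_hooks');

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  const maxDelayMs = histogram.max / 1e6; // histogram values are in nanoseconds
  if (maxDelayMs > 1000) {
    console.warn('Event loop was blocked for up to ' + maxDelayMs.toFixed(0) + ' ms');
  }
  histogram.reset();
}, 10000);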
I assume that in the beginning you were not facing this issue, right?
And after the data increased, you started facing it,
and the timeout happens because the API does not return a response.
Based on those assumptions, you may have an issue with MongoDB: when MongoDB has a huge dataset, my suggestion is to check your MongoDB operations, because when your requests time out, MongoDB is probably under load.
To check the MongoDB processes, run
db.currentOp()
in the mongo shell or any editor where you can run it,
check which operations have been running for a long time,
then resolve them and watch.
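For example, to list only the operations that have been active for more than a few seconds (the 3-second threshold is just an example), run:
db.currentOp({
  "active": true,
  "secs_running": { "$gt": 3 }
})
The output shows each operation's query and how long it has been running, so you can see which query is holding MongoDB up when a request times out.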
I can't understand why my site goes into a literally infinite loop after 3 operations. I am using Eclipse and hosting the server with Tomcat; as the database I use MySQL Workbench. The server initially does not give errors, or so it seems for the first two operations, but as soon as I perform the third operation the site stops responding and loads forever (I attach photos and code).
I don't think it's a code problem; maybe it's Tomcat's max cache? I've been banging my head for more than 3 days but I just can't solve it. Finally, I don't get any kind of error in the Eclipse console, and the only way to get my site up and running again is to completely restart the Tomcat server.
This is my code: https://github.com/Fatted/Olysmart_TSW2021 (download the zip, the database is included)
I already thank those who will help me, or who at least took some time to read this post. Thanks!
After a quick run through the code, I think you're experiencing a DB connection leak. Each query in your DAOs opens a new connection to the DB and never closes it. Try using the try-with-resources syntax for connections.
I've been through a number of Node.js, Express, and other tutorials/posts, and I'm struggling with how to think about connecting to a database on various pages throughout a webapp.
I would like to run a Node.js app (with a server.js file that connects to a database) and then query that database as needed on every page throughout the app.
So if I have an inventory.html page I should be able to have javascript that queries the inventory table and displays various inventory items throughout that html page.
Problem #1. I can't find a way to use mysql on any client-side pages, since JavaScript can't use Node's require() function client-side, as detailed in this StackOverflow post ("require is not defined").
Problem #2. I can't figure out an elegant way to pass a database connection to other pages in my app. A page can send a POST request back to the server.js file, but this really isn't as flexible as I want.
I'm really looking for the modern, preferred way to do in my Node app what I'd otherwise do with a bunch of PHP scripting. Can anyone guide me to the right way to do this? Thank you!
You just can't call mysql directly from the client. Even if it worked, imagine that anybody could modify the SQL queries and access all your data.
The only way how to do it is this:
js client app ------> js server app -------> mysql
You simply must have 2 apps: one running in the user's browser, sending requests to the server, and the other running on the server, answering those requests.
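A minimal sketch of that middle layer in Node (the table, columns, and connection details here are made up, and I'm assuming the mysql2 package): the server app exposes an HTTP endpoint that runs the SQL, and the browser page just fetches JSON from it instead of using require().
// server.js - the only code that talks to MySQL
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'shop'
});

app.get('/api/inventory', async (req, res) => {
  const [rows] = await pool.query('SELECT id, name, quantity FROM inventory');
  res.json(rows); // the browser only ever sees JSON, never the SQL or the connection
});

app.listen(3000);
On inventory.html, plain client-side JavaScript then calls fetch('/api/inventory') and renders the returned rows; no require() and no database connection ever reach the browser.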
I'm currently experimenting with WebSockets in a bid to reduce / remove the need for constant AJAX requests in a potentially low bandwidth environment. All devices are WebSocket compliant so there's no issue there, and I'm trying to keep it to native PHP WebSockets, no node.js or other frameworks / libraries (Which so far has been fine).
What I'm looking to do is to decide how to go about notifying connected clients about an update to a database by another client. The use case in question is a person pressing a button on their device, which then alerts that person's manager(s) to that press. So the two options I have thought of are as follows:
1. Looping a Database Query (PHP)
My first thought was to insert a query into the WebSocket server that effectively says "Has the alert field changed? If so, notify the manager(s)." Whilst this is the most straightforward and sensible approach (that I can think of), it seems wasteful to have a PHP script that was designed to reduce strain on the server now running a query every second. However, at least this would ensure that when a database update is detected, the update is sent.
2. Sending a notification from the Client
Another thought I had was that when the client updates the database, they could in fact send a WebSocket notification themselves. This has the advantage of avoiding any intensive, looped queries, but it also means that I'd need a WebSocket message sent every time I want to change any data, such as:
$.post("AttemptDatabaseUpdate.php", {Data}, function(Result) // Don't worry about the semantics of this, it's not actual code
{
    if (Result == "Successful")
    {
        SendWebSocketNotification(OtherData);
    }
});
Maybe this is the best option, as it is the most efficient, but I worry that there is a chance the connection may drop between updating the database and sending the WebSocket notification, which may create the need for a fallback check in the PHP file, much like the one in the first solution, albeit at a longer interval (say, every 30 seconds).
3. MySQL Trigger?
This is purely a guess, but perhaps another option is to create a MySQL trigger which can somehow notify the server.php file directly? I've no idea how this would work, and would hazard a guess that it may end up with the same or similar query requirements as solution #1, but it's just a thought...
Thank you in advance for your help :)
EDIT: Solution possibility 4
Another thought has just popped into my head, whereby the PHP file used to update the database could have a WebSocket message built into it. So when the PHP file updates the database, the WebSocket server is notified via PHP; is this possible?
If you use websockets, you should use notifications from client. That's one of their main use cases.
If you're worried about inconsistencies due to the connection dropping or something changing in between, you could implement a system similar to HTTP ETags, where the client sends a hash code and the server responds if there is a conflict in the update.
Update: I guess I understood your initial issue a bit wrong. If I understand your use case correctly, you are sending database updates from a client, and after that all connected clients need to be updated. In that case, I think the server should send the update messages after the DB updates have been done, so I agree with solution 4. I am assuming here that your websocket server is the same server running PHP and doing the DB updates.
However, depending on your use case, the client should still send a hash value on the next request identifying its "view of the world", so you would not be doing identical updates multiple times if a connection gets broken.
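On the browser side (plain JavaScript, regardless of the PHP websocket server), that hash idea could look something like this sketch; the message format and the stateHash field are assumptions:
var socket = new WebSocket('wss://example.com/ws');

function sendUpdate(payload, currentStateHash) {
  // Include a fingerprint of the client's current view with each update,
  // so the server can detect conflicts or duplicates after a dropped connection.
  socket.send(JSON.stringify({
    type: 'update',
    stateHash: currentStateHash,
    data: payload
  }));
}

socket.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  if (msg.type === 'conflict') {
    // Server's state differs from ours: refresh the view before retrying
    console.warn('State conflict, refreshing before retry');
  }
};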
Update 2: So it is now understood that you indeed use a separate, standalone websocket server. Basically you have two different web servers on the server side and are having an issue with how to communicate between the two. This is a real issue, and I'd recommend only using one server at a time: either take a look at using Apache websocket support (experimental and not really recommended) or migrate your PHP scripts to the websocket instance.
Neither PHP nor Apache was really built with websockets in mind. It is quite easy to set up a standalone websocket server using only PHP, but it might not be so easy to migrate the rest of the PHP stack to it if the code relies on Apache/the web server. Apache websocket support is also hardly optimal. For a real websocket solution, unfortunately, best practice would be to use a technology that is built for it from the ground up.
The better answer is to send the notification from the server side when the database is updated by the PHP script, so that script has to use websockets to directly send the notification to all registered websocket clients.
User sends content -> PHP script processes the content and saves the data if the conditions hold -> check that the database was updated by checking the return value of mysql_query (or an alternative) -> if true, use the websocket and send a notification to all users.
This is easier, handier, and saves bandwidth.
Why make the server push data to deliver notifications, like using SignalR, when it can be done client side?
Using a JavaScript timing event that checks for recent updates at specified time intervals, the user can get notifications as long as they remain connected to the server.
So my question is why do more work at the server that the client can already do?
It's not more work for the server, it's less work. Suppose that you have 10,000 clients (and the number could easily be in the hundreds of thousands or even millions for popular websites) polling the server every X seconds to find out if there's new data available for them. The server would have to handle 10,000 requests every X seconds even if there's no new data to return to the clients. That's huge overhead.
When the server pushes updates to the clients, the server knows when an update is available and it can send it to just the clients this data is relevant to. This reduces the network traffic significantly.
In addition it makes the client code much simpler, but I think the server is the critical concern here.
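To make the difference concrete, here is a rough client-side sketch of the two approaches (the endpoint names and showNotifications are placeholders):
// Polling: every client asks the server every 5 seconds, update or not.
setInterval(function () {
  fetch('/api/notifications/latest')
    .then(function (r) { return r.json(); })
    .then(showNotifications);
}, 5000);

// Push: the server sends a message only when there is actually something new.
var socket = new WebSocket('wss://example.com/notifications');
socket.onmessage = function (event) {
  showNotifications(JSON.parse(event.data));
};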
First, if you don't use server push you will not get instant updates; for example, you can't build a chat application. Second, why bother the client with a job it is not designed to do? Third, you will have performance issues on the client because, as #Ash said, the server is a lot more powerful than a client computer.
Wanted to get some feedback on this implementation.
I'm developing an application on the PC to send and receive data to the serial port.
Some of the data received by the application will be solicited, while other data will be unsolicited.
Controlling the serial port and processing messages would be handled by a Python application that would reside between the serial port and the MySQL database. This would be a threaded application with one thread handling sending/receiving using the Queue library and other threads handling logic and the database chores.
The MySQL database would contain tables for storing data received from the serial port, as well as tables of outgoing commands that need to be sent to the serial port. A command sent out may or may not be received, so some means of handling retries would be required.
The webapp, using HTML, PHP, and JavaScript, would provide the UI. Users can query data and send commands to change parameters, etc. All commands sent out would be written into an outgoing table in the database and picked up by the Python app.
My question: Is this a reasonable implementation? Any ideas or thoughts would be appreciated. Thanks.
It seems there are a lot of places for things to go wrong.
Why not just cut out PHP all together and use python?
e.g. use a Python web framework and let your JavaScript communicate with that, while it also reads the serial port and logs to MySQL.
That's just me though. I'd try and cut out as many points where it could fail as possible and keep it super simple.
You might also want to check out pySerial (http://pyserial.sourceforge.net/). You might also want to think about your sampling rates, i.e. how much data you are going to be generating and at what frequency; in other words, how much data you are planning to store. That will give you some idea of system sizing.