I can't understand why my site goes into a literally infinite loading state after three operations. I am using Eclipse, I am hosting the server with Tomcat, and the database is MySQL (managed through Workbench). The server initially gives no errors, or so it seems for the first two operations, but as soon as I perform the third operation the site stops responding and loads forever (I attach photos and code).
I don't think it's a code problem; maybe it's Tomcat's max cache? I've been banging my head against this for more than three days and just can't solve it. Finally, I don't get any kind of error in the Eclipse console, and the only way to get my site up and running again is to totally restart the Tomcat server.
This is my code: https://github.com/Fatted/Olysmart_TSW2021 (download the zip; the database is included)
Thanks in advance to anyone who helps me, or who at least took some time to read this post!
After a quick run through the code, I think you're experiencing a DB connection leak. Each query in your DAOs opens a new connection to the database and never closes it, so once the connection pool (or MySQL's connection limit) is exhausted, the next request blocks forever waiting for a free connection; that is exactly the "hangs on the Nth operation, works again after a Tomcat restart" symptom. Try using the try-with-resources syntax for connections.
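A minimal sketch of the pattern (the JDBC URL, credentials, and the users table here are placeholders, not taken from the linked repository):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDAO {
    // Placeholder connection details; substitute your real ones.
    private static final String URL = "jdbc:mysql://localhost:3306/olysmart";

    public String findNameById(int id) throws SQLException {
        String sql = "SELECT name FROM users WHERE id = ?";
        // try-with-resources closes the Connection, PreparedStatement and
        // ResultSet automatically, even when an exception is thrown, so no
        // connection is ever leaked.
        try (Connection con = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}

Better still, have the DAOs borrow connections from a container-managed pool (e.g. a Tomcat JNDI DataSource) instead of calling DriverManager directly; try-with-resources then returns the connection to the pool rather than tearing it down each time.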
We have a server running on Express.js, with a MongoDB database accessed through the Mongoose framework. The application is in production, and we are facing an issue of APIs taking too long to respond; sometimes they even time out. It is completely random and happens for just 10-15 requests a day. We have an AWS load balancer, the server is hosted on 2 instances, and MongoDB is hosted on a separate instance. We have put logs in the required places in the server. At first I thought it was a TCP/IP connection issue, so I configured keepAliveTimeout and headersTimeout in the application (Reference), but that didn't help either.
I tried to analyze the logs and found something: the API request reaches the server but doesn't start processing for 5-10 seconds, and sometimes it processes everything in a second but doesn't send the response back. I even tried to replicate the issue on DEV with Apache JMeter, but it doesn't reproduce there. I'm thinking this might be due to Node.js being single-threaded, but there are so many applications running on Node.js; how do they manage such scenarios? Also, we have millions of records in the database; could those create an issue? By the way, we have properly indexed the fields. Please help me out with this issue.
I assume that in the beginning you were not facing this issue, right? It appeared after the data increased, and the timeout happens because the API never sends a response. According to those assumptions, you may have an issue with MongoDB once it holds a huge dataset. My suggestion is to check your MongoDB operations at the moment a request times out, because MongoDB will be under load then.
To check MongoDB's in-progress operations, run

db.currentOp()

in the mongo shell (or any client that can run it), check which operations have been running for a long time, resolve those, and watch whether the timeouts disappear.
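For example, a minimal shell snippet that lists only operations that have been running for more than five seconds (the threshold is an arbitrary starting point):

// List active operations that have been running longer than 5 seconds.
db.currentOp({ active: true, secs_running: { $gt: 5 } }).inprog.forEach(function (op) {
    printjson({ opid: op.opid, secs: op.secs_running, ns: op.ns, op: op.op });
});

If one of them is clearly the culprit and is safe to stop, it can be terminated with db.killOp(<opid>).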
I have a CGI script written in C++ which performs a relatively simple loop in a brute-force evaluation of a scheduling-type problem. Parameters are collected from a database, and the CGI script is called from the web browser in JavaScript, using an XMLHttpRequest that passes the parameters in a POST request. This works fine. But sometimes it takes too long, and I would like the user to be able to abort the script by clicking a button in the browser while the script is running.
I have resorted to polling a little file from inside the CGI script. The file can contain either '0' or '1', indicating whether the script should abort. This works, too. However, the file I/O itself takes a lot of time, relatively speaking, and I was wondering if there is a more elegant way of doing this? I can only check the file every 4 or 5 million iterations; otherwise I run into problems. I can live with that, but I am wondering whether I could do it with an environment variable, for example?
Thanks for any tips!
CGI is inflexible, so any solution should rely on other means. (An environment variable won't work, by the way: a process's environment is copied at launch and cannot be changed from the outside afterwards.)
Possible strategies:
put that file on a RAM disk - the file I/O cost should go way down
replace the filesystem signaling with TCP. Have the executor script open a socket to an "abort daemon" listening on a dedicated port; if the executor "peeks" at the socket and even one byte is available from the abort daemon, it aborts (a sketch follows below). Once started, the executor script only needs to communicate its open port to the aborting page. Another script, pointed to by the "Abort" button's URL, then tells the abort daemon which port to send the "killer byte" to.
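A sketch of the "socket peek" check on the C++ side, assuming abort_fd is a socket already connected to the abort daemon (POSIX/Linux sockets; error handling omitted):

#include <sys/socket.h>

// Returns true as soon as the abort daemon has sent its "killer byte".
// MSG_PEEK | MSG_DONTWAIT makes the call return immediately either way,
// so it is cheap enough to call every few million iterations:
//     if (iter % 4000000 == 0 && abort_requested(abort_fd)) break;
bool abort_requested(int abort_fd) {
    char byte;
    return recv(abort_fd, &byte, 1, MSG_PEEK | MSG_DONTWAIT) > 0;
}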
With the extra info that the server runs PHP, you may try a PHP cache as a messaging mechanism - see apc_store() and related functions.
Other caches seem to exist as well - there's a list of them here.
Perhaps an overkill - Redis. They say:
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker.
It supports, among a huge list of languages, C and PHP, and it seems notable enough to have a Wikipedia entry.
After some preliminary tests, it seems that the shared memory facilities offered by PHP's shmop will be the easiest and safest to use in the server environment available to me. Many thanks to Adrian Colomitchi, who pointed me in the right direction (RAM disk == shared memory)!
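For reference, a minimal shmop sketch of the writer side (the key 0xff3 is an arbitrary placeholder; the C++ CGI script would read the same System V segment via shmget()/shmat() instead of polling a file):

<?php
// abort.php - called by the "Abort" button; raises the shared flag.
$key = 0xff3;                           // placeholder key; must match the reader
$shm = shmop_open($key, "c", 0644, 1);  // create/open a 1-byte segment
shmop_write($shm, "1", 0);              // write the abort flag at offset 0
shmop_close($shm);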
How does Stack Overflow show the answer added/edited message without a page reload?
Yes, I know it happens with Ajax, so I opened Firebug in the browser to check whether any requests come in at a particular interval.
But I don't see any requests in Firebug.
Can a request be performed without it showing up in Firebug?
Are there any other ideas behind this, or is my concept totally wrong?
It appears to be using HTML5 WebSockets. They basically keep an open connection between the server and the client and, among many other things, allow the client to define event handlers that process new data as it is received from the server.
Here you'll find some basic examples and concepts about WebSockets: Introducing WebSockets: Bringing sockets to the web.
The WebSocket specification defines an API establishing "socket" connections between a web browser and a server. In plain words: there is a persistent connection between the client and the server, and both parties can start sending data at any time.
There is also a live demo with server & client source code available.
You might also find the following question useful: How do real time updates work?
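A minimal browser-side sketch of the idea (the URL and message format are made up for illustration):

// Open one persistent connection; the server can push at any time.
var socket = new WebSocket("wss://example.com/updates");  // placeholder URL

socket.onmessage = function (event) {
    // e.g. show the "new answer added" banner without reloading the page
    var update = JSON.parse(event.data);
    console.log("server pushed:", update);
};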
To add to Xavi's answer, if you want to get started with web sockets without having to understand all the internals, you might try out Pusher, a library for multiple platforms (including PHP) that makes push notifications on the web very straightforward.
I do not work for Pusher; it's just a product I've found very useful in the past. I've always used the free version for small personal projects, though I would probably pay if I ever used it on a larger application.
So, I'm using a forever-frame to stream data from Tornado to a JavaScript client application, and I'm finding that the JavaScript client occasionally just stops receiving data. I've implemented a heartbeat method, where the client changes the URL of the frame to reopen the connection when a heartbeat is missed, but this feels like an awkward hack, and a certain amount of setup and teardown has to happen in the app UI when the connection refreshes. I'd really prefer a single persistent connection for the entire session of use.
Sometimes this happens once every few minutes; other times it gets into a loop where it happens every five seconds. My browsers are Firefox and Chrome.
What kinds of things could cause this issue? I really just need some starting points for my debugging: should I be looking at latency, data flooding, a bad connection? Is the problem more likely to be at the Tornado end or the JavaScript end? Alternatively, would I be better off investing my effort in making the JavaScript app reinitialize itself more gracefully?
Aha. I figured this out. Tornado's IOLoop is not thread-safe. The issue was that my logic was calling the long-lived RequestHandler instances from multiple threads (triggered by inbound RPCs), and when those calls collided, Tornado would freak out and close the connection.
The fix was to queue up my interactions with RequestHandler instances on the IOLoop thread, using add_callback:
tornado.ioloop.IOLoop.instance().add_callback(do_stuff)
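For context, a minimal sketch of the pattern (write_to_client is a placeholder for whatever the handler actually does with the data):

import tornado.ioloop

# Called from a worker thread (e.g. by an inbound RPC). Never touch the
# RequestHandler directly here; hand the work over to the IOLoop thread.
def on_rpc_message(handler, message):
    loop = tornado.ioloop.IOLoop.instance()
    # add_callback is the one IOLoop method that is safe to call from
    # another thread; the lambda then runs on the IOLoop's own thread.
    loop.add_callback(lambda: handler.write_to_client(message))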
When I use a comet iframe, I just send script tags from the backend PHP file to the front end, and JavaScript displays them.
Can someone explain briefly where a comet server comes into the picture, and how the communication works between the frontend (JavaScript), the backend (PHP), and the comet server?
I ask because I read that if a lot of users are going to use your comet application, it's better to have a comet server, but I don't quite understand the coupling between these parts.
Use this link:
http://www.zeitoun.net/articles/comet_and_php/start
That is the best tutorial I could find, and it takes one minute to try.
In short:
(image from that tutorial)
The index page, which can be HTML or PHP, creates a request that PHP doesn't answer until there is data to send back; with chat, that's when someone sends you a message (sketched below).
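A minimal long-polling sketch of that flow (messages.txt and the 30-second cap are my own placeholders, not taken from the tutorial):

<?php
// poll.php - hold the request open until new data exists, then answer.
$file  = 'messages.txt';               // placeholder data source
$since = (int) $_GET['since'];         // last modification time the client saw

$deadline = time() + 30;               // give up after 30 s so the client re-polls
while (time() < $deadline) {
    clearstatcache();                  // filemtime() results are cached otherwise
    if (filemtime($file) > $since) {   // new message arrived
        echo json_encode(array('ts' => filemtime($file), 'msg' => file_get_contents($file)));
        exit;
    }
    usleep(200000);                    // wait 200 ms between checks
}
echo json_encode(array('ts' => $since, 'msg' => null));  // timed out; client re-polls

Note that each waiting client holds one PHP process for the duration of the loop, which is exactly the scaling problem described next.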
If you have many users chatting, I recommend using a Java chat app; otherwise your server will load up with running PHP processes (each unanswered request keeps a PHP engine alive, which costs server capacity).
http://streamhub.blogspot.com/2009/07/tutorial-building-comet-chat.html
This should help you out with that, but you do need Java hosting :)
Have fun!
Edit: I just read the part about the other server. Sending requests to your own server can get messy because the timeout function may not work well, so the server crashes; an independent server times out the connection after a certain amount of time, no matter what.
I have a very simple example here that can get you started with comet. It covers compiling Nginx with the NHPM module and includes code for simple publisher/subscriber roles in jQuery, PHP, and Bash.
http://blog.jamieisaacs.com/2010/08/27/comet-with-nginx-and-jquery/
A working example (simple chat) can be found here:
http://cheetah.jamieisaacs.com/