Is there any way I can do some database updates whenever my Node.js server crashes or is stopped, like try/catch/finally in Java? I am a bit of a newbie here.
Does Node emit any event before it shuts down? If so, I can write my function there.
My scenario: if I stop the server manually, I need to update some fields in the database.
The same goes for unhandled crashes.
I have heard about domains in Node.js, but I have no idea how to monitor a whole server using a domain.
An event is emitted when the node process is about to exit:
process.on('exit', function(code) {
  console.log('About to exit with code:', code);
});
http://nodejs.org/api/process.html#process_event_exit
You can't query the database here though, since this handler can only perform synchronous operations. Some possible alternatives:
use database transactions so you never need to do "database updation things" when your app crashes
use a tool like Upstart to automatically restart your process, and then do database fixup stuff whenever your process starts
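For the manual-stop case specifically, async database work is still possible if you intercept the termination signal yourself, do the cleanup, and only then exit. A sketch (the cleanup callback and its contents are placeholders):

```javascript
// On SIGINT/SIGTERM (e.g. Ctrl+C or a plain `kill`), async work is possible
// because we decide when to call process.exit() ourselves. The 'exit' event,
// by contrast, only allows synchronous code.
function onShutdown(cleanup) {
  const handler = async signal => {
    try {
      await cleanup(signal); // e.g. update your database fields here
    } finally {
      process.exit(0);
    }
  };
  process.on('SIGINT', handler);
  process.on('SIGTERM', handler);
}
```

Note that this only covers manual stops: an outright crash (`kill -9`, power loss) never gives the process a chance to run anything, which is why the transaction and restart-fixup alternatives above are more robust.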
When you are using Node.js, it's bad practice to rely on try/catch alone, because of the large number of asynchronous calls. The better practice here is to use promises; review the next link, where you can find a good explanation: https://www.promisejs.org/
Related
I have a Meteor app that is performing some calls that currently hang. I'm processing a lot of items in a loop and then upserting to server-side Mongo (I think this is done asynchronously). I understand that upserting in a loop is not good.
This whole function seems to make the app hang for a while. I'm even noticing SockJS and WebSocket errors in the console. I think this is all due to DDP, async Mongo upserts, and the slow requests.
Here's some pseudocode for what I'm talking about:
for (1..A Lot of records) {
  // Is this async?
  Collection.upsert(record)
}
Eventually this function will complete. However, I'll notice that Meteor "restarts" (I think this is true because I see Accounts.onLogin being called again). It's almost like the client refreshes after the slow request has actually finished. This results in something that appears like an infinite loop.
My question is why the app is "restarting". Is this due to something in the framework and how it handles slow requests? I.e. does it queue up all bad requests and then eventually retry them automatically?
I am not sure about what exactly is going on here, but it sounds like the client isn't able to reach the server while it is "busy", and then the client connection over DDP times out, and ends up with a client refresh. The server process probably doesn't restart.
One technique for improving this is to implement a queue in your database. One piece of code detects there are a bunch of database upserts to do, so it records the information in a table which is used as a queue.
You set up a cron job (using, e.g., the npm module node-cron) that looks for things in the queue on a regular basis. When it finds an unprocessed record, it does the upsert work needed, and then either updates a status value in the queue record to 'done', or simply deletes it from the queue. You can decide how many records to process at a time to minimise interruptions.
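A minimal sketch of that queue idea, with assumed shapes: a `queue` array standing in for the queue table, and a `doUpsert` callback standing in for the real Collection.upsert. It uses plain setInterval instead of node-cron so the example has no dependencies:

```javascript
// Process up to batchSize 'pending' records from the queue, marking each
// 'done' once its upsert work has run.
function processQueueBatch(queue, doUpsert, batchSize) {
  const pending = queue.filter(r => r.status === 'pending').slice(0, batchSize);
  for (const record of pending) {
    doUpsert(record.payload); // the real upsert work goes here
    record.status = 'done';   // or delete the record instead
  }
  return pending.length;      // how many records this tick handled
}

// Poll the queue every 5 seconds, a few records at a time.
function startQueueWorker(queue, doUpsert) {
  return setInterval(() => processQueueBatch(queue, doUpsert, 10), 5000);
}
```

Keeping the batch size small is what prevents any single tick from hogging the event loop the way the original big loop did.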
Another approach is to do the processing in another node process on your server, basically like a worker process. If this process is busy, it is not going to impact your front end. The same queueing technique can be used to make sure this doesn't get bogged down either.
You lose a little reactivity this way, but given it's some kind of bulk process, that shouldn't matter.
I have a Python WebSocket server attempting to communicate with a JavaScript WebSocket client (embedded in HTML). The events are being emitted from the server immediately, but it takes upwards of 30 seconds for the server's event to reach the client, despite both the client and server being hosted locally.
Here is the relevant code for the server:
sio = socketio.AsyncServer(cors_allowed_origins='*')
app = web.Application()  # aiohttp web server
loop = asyncio.get_event_loop()
sio.attach(app)

async def index(request):
    with open('./index.html') as f:
        return web.Response(text=f.read(), content_type='text/html')

app.router.add_get('/', index)
app.router.add_get('/index.html', index)

if __name__ == '__main__':
    web.run_app(app)
The event is being fired like so (edit: this must be done with event loops, as emit is an asynchronous function being run from a synchronous one):
print('Starting event')
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(sio.emit('ChangeProgressState'))
loop.close()
print('Event has been fired.')
However, the print statements show up immediately. On the client end, I am connecting and trying to consume the event like this:
const socket = io.connect("http://localhost:8080", {
  transports: ['websocket']
})
socket.on("ChangeProgressState", function (data) {
  console.log("got event.")
  //some code here...
});
However, the time between the event firing and the JavaScript socket noticing it can be very long: from 30 seconds to sometimes a few minutes. Is there something I'm doing wrong here?
It should be noted, there are very little (2%-5%) resources being consumed (both memory and CPU), so I do not currently think that is the issue. Any help would be much appreciated.
EDIT 11/15/2019: I have tried looking at the networking tab of the application (chromium-browser on raspberry pi). It seems to show the initial socket connection, but it doesn't show anything in terms of communication between sockets, even after the event eventually fires.
EDIT 2: This definitely seems to be an issue server-side. I can send events from the JS client to the python server essentially immediately, but going in the other direction is when it takes a long time to arrive. I'm not quite sure why though.
Ah ok, so my gut said it sounds like the client is long polling. Many socket libraries first establish long-polling and then upgrade to ws connections.
After taking a look at Socket.io:
... which first establishes a long-polling connection, then tries to upgrade to better transports that are “tested” on the side, like WebSocket. ...
So I don't believe you're doing anything wrong, it's just the initialization process of establishing the WebSocket connection.
As for the python part, I'll be honest that's a tad more fuzzy to me. My first guess is that the loop code doesn't block the print statement from being executed -- but I'm more familiar with JavaScript than Python, so not completely certain on that front. My second guess is that I do know from other pub/sub libraries that the server side engine sometimes makes use of a middle layer of sorts (sometimes a cache, sometimes a queue) that helps ensure messages are sent/received, that's also a possibility.
extra tidbit: I suspect if you look at the network tab of your browser's dev tools, it'd display that behavior, some form of HTTP requests, and then eventually you'd see the socket connection. Playing around with turning your Python server/service off/on would also demonstrate the robustness of socket.io in the browser and edge cases for how it handles unstable networking when communicating with respect to various internet communication protocols.
Thank you to everyone that helped answering this question! I finally found a solution that is a bit unorthodox, so I'll explain the whole situation here.
Essentially, in order to run an async method in a synchronous context, you must use the asyncio's run_until_complete method on an event loop. This is how I was doing it when this question was asked. However, after talking to the creator of the python-socketio library, it seems that you must run this in the same event loop as the one the server is running in.
However, this creates a different problem. If an event loop is already running, python does not allow you to use run_until_complete on it, giving you an error: RuntimeError: This event loop is already running.
So, these things sound contradictory, right? And you would be correct. However, this problem is prevalent enough that another library exists for the sole purpose of monkey-patching Python's asyncio library to fix it. I found that library here.
After installing and utilizing that library, I can now do this, which fixes my problem completely:
main_event_loop = asyncio.get_event_loop()
main_event_loop.run_until_complete(sio.emit("ChangeProgressState"))
Now the program runs as expected, and the messages are being sent/arriving immediately.
I'm researching PHP 7 and Node.js to decide which one is better suited for my tasks. I read that Node.js needs a server restart when an error gets thrown.
So let's say I use many libraries in my website, so an error is plausible.
I read that in Node.js I can store data in variables instead of in a database and use that data from the variables in the next call. Correct me if I'm wrong; I've never used Node.js so far.
Now an error gets thrown, and because of this the server needs to be restarted.
Then I read there are tools that restart the server for you, e.g. the tool called "forever". But now my questions:
Can the next instance of my server maintain the state of the old instance, or does the data in the variables get lost?
Or do I have to pass this data via some tool like "forever" in the constructor or something of the next instance of the server? I guess this would be spaghetti code.
And if an error gets thrown because of a wrong request while other requests are still processing, and the server shuts down because of the error, will all those requests time out or return something?
Thank you very much for clearing this up for me.
I read that in Node.js I can store data in variables instead of in a database and use that data from the variables in the next call. Correct me if I'm wrong; I've never used Node.js so far.
You are wrong. Though you can store data in variables and reuse them, node doesn't work the way you are thinking.
Can the next instance of my server maintain the state of the old instance, or does the data in the variables get lost?
It gets lost.
Or do I have to pass this data via some tool like "forever" in the constructor or something of the next instance of the server? I guess this would be spaghetti code.
You need a datastore: a database like MySQL or Redis, for example.
And if an error gets thrown because of a wrong request while other requests are still processing, and the server shuts down because of the error, will all those requests time out or return something?
They will be killed.
You have to add error handling, like in every other program you write. A properly written program should shut down rarely to never, because you catch all your errors.
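For instance, a common catch-all pattern for async request handlers (a sketch; `safe` and the handler below are made up for illustration, not part of any framework):

```javascript
// Wrap an async request handler so a thrown error becomes a 500 response
// instead of an uncaught exception that brings the whole server down.
function safe(handler) {
  return async function (req, res) {
    try {
      await handler(req, res);
    } catch (err) {
      res.statusCode = 500;
      res.end('Internal error');
    }
  };
}

// Example: a handler that blows up on bad input.
const getUser = safe(async (req, res) => {
  if (!req.userId) throw new Error('wrong request');
  res.statusCode = 200;
  res.end('user ' + req.userId);
});
```

A bad request then fails on its own; the other in-flight requests are untouched because the process never dies.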
Is there a way to do a synchronous read of a TCP socket in node.js?
I'm well aware of how to do it asynchronously by adding a callback to the socket's 'data' event:
socket.on('data', function(data) {
  // now we have the string data to do whatever with
});
I'm also aware that trying to block with a function call instead of registering callbacks goes against node's design, but we are trying to update an old node module that acts as a client for my university while maintaining backwards compatibility. So we currently have:
var someData = ourModule.getData();
Where getData() previously had a bunch of logic behind it, but now we just want to send to the server "run getData()" and wait for the result. That way all logic is server side, and not duplicated client and server side. This module already maintains a TCP connection to the server so we are just piggybacking on that.
Here are the solutions I've tried:
Find a blocking read function for the socket hidden somewhere similar to python's socket library within node's net module.
string_from_tcp = socket.recv(1024)
The problem here is that it doesn't seem to exist (unsurprisingly because it goes against node's ideology).
This syncnet module adds what I need, but has no Windows support, so I'd have to add that.
Find a function that allows node to unblock the event loop, then return back, such that this works:
var theData = null;
clientSocket.on('data', function(data) {
  theData = data;
});
clientSocket.write("we want some data");
while(theData === null) {
  someNodeFunctionThatUnblocksEventLoopThenReturnsHere(); // in this function node can check the tcp socket and call the above 'data' callback, thus changing the value of theData
}
// now theData should be something!
Obvious problem here is that I don't think such a thing exists.
Use ECMAScript 6 generator functions:
var stringFromTcp = yield socketRead(1024);
The problem here is that we'd be forcing students to update their JavaScript clients to this new syntax, and understanding ES6 is outside the scope of the courses that use this module.
Use node-gyp and add to our node module an interface to a C++ TCP library that does support synchronous reads, such as Boost's Asio. This would probably work, but getting the node module to compile with Boost cross-platform has been a huge pain. So I've come to Stack Overflow to make sure I'm not over-complicating this problem.
In the simplest terms I'm just trying to create a command line JavaScript program that supports synchronous tcp reads.
So any other ideas? And sorry in advance if this seems blasphemous in context of a node project, and thanks for any input.
I ended up going with option 5. I found a small, fast, and easy-to-build TCP library in C++ (netLink) and wrote a Node module wrapper for it, aptly titled netlinkwrapper.
The module builds on Windows and Linux, but as it is a C++ addon you'll need node-gyp configured to build it.
I hope no one else has to screw with Node.js as I did using this module, but if you must block the event loop with TCP calls this is probably your only bet.
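For readers who can drop the synchronous requirement, the one-shot "send a request, wait for the reply" pattern from the question also wraps neatly in a Promise (a sketch; `requestOnce` is a made-up name, and it assumes each request gets exactly one reply chunk):

```javascript
// Write a message on an existing TCP socket and resolve with the next
// 'data' event. No polling, no blocked event loop.
function requestOnce(socket, message) {
  return new Promise((resolve, reject) => {
    socket.once('data', data => resolve(data.toString()));
    socket.once('error', reject);
    socket.write(message);
  });
}
```

With async/await this reads almost like the blocking call the question asked for: `var someData = await requestOnce(clientSocket, "we want some data");`.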
I have a radio station at Tunein.com. In order to update album art and artist information, I need to send the following
# Update the song now playing on a station
GET http://air.radiotime.com/Playing.ashx?partnerId=<id>&partnerKey=<key>&id=<stationid>&title=Bad+Romance&artist=Lady+Gaga
The only way I can think to do this would be by setting up a PHP/JS page that updates the &title and &artist parts of the URL and sends the request off if there is a change. But I'd have to execute it every second, or at least every few seconds, using cron.
Are there any other more efficient ways this could be done?
Thank you for your help.
None of the code in this answer was tested. Use at your own risk.
Since you do not control the third-party API and the API is not capable of pushing information to you when it's available (an ideal situation), your only option is to poll the API at some interval to look for changes and to make updates as necessary. (Be sure the API provider is okay with such an approach as it might violate terms of use designed to prevent system abuse.)
You need some sort of long-running process that will execute at a given interval.
You mentioned cron calling a PHP script which is one option (here cron is the long-running process). Cron is very stable and would be a good choice. I believe though that cron has a minimum interval of 1 minute. I'm sure there are similar tools out there, but those might require you to have full control over your server.
You could also make a PHP script the long-running process with something like this:
while(true){
    doUpdates(); # Call the API, make updates, etc
    sleep(5);    # Wait 5 seconds
}
If you do go down the PHP route, error handling of some sort will be a must:
while(true){
    try{
        doUpdates();
    } catch (Exception $e) {
        # manage the error
    }
    sleep(5);
}
Personal Advice
Using PHP as a daemon is possible, but it is not as well tested as typical uses of PHP. If this task were given to me, I'd write a server/application in JavaScript using Node.js. I would prefer Node because it is designed to work as a long-running process, intervals/events are a key part of JavaScript, and I would be more confident in it working well for this specific task than PHP.
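A sketch of what that Node version might look like. `getNowPlaying` and `sendUpdate` are hypothetical callbacks standing in for reading your station's metadata and hitting the Playing.ashx endpoint; the point is the poll-and-dedupe shape, which only contacts the API when the track actually changes:

```javascript
// Build a polling function that notifies the API only on track changes.
function makePoller(getNowPlaying, sendUpdate) {
  let last = null;
  return function poll() {
    const track = getNowPlaying(); // e.g. { artist: 'Lady Gaga', title: 'Bad Romance' }
    const key = track.artist + '|' + track.title;
    if (key !== last) {            // skip the API call if nothing changed
      last = key;
      sendUpdate(track);
    }
  };
}

// In the real app you'd run it on an interval:
// setInterval(makePoller(getNowPlaying, sendUpdate), 5000);
```

Because the dedupe happens client-side, polling every few seconds stays cheap: most ticks do nothing at all.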