I have an Express.js API that I recently had to tweak for some, frankly, dumb changes that affected core elements of it.
Now I have the problem that sometimes my server doesn't respond, and after changing ~15 API paths I can no longer backtrack and redo the work without losing way too much time. I suspect the server is somehow stuck in an endless loop, busy looping through infinity and not responding to anything else.
Is there any good way to debug this kind of bug? Can I, for example, log the line number I am currently running every second?
I used the Visual Studio debugger to see the call stack and the current line upon pausing the process.
You can debug Express by setting the DEBUG environment variable:
http://expressjs.com/en/guide/debugging.html
If you want to see all the logs Express uses internally, set the environment variable DEBUG to express:* when you start your application:
$ DEBUG=express:* node index.js
or use node-inspector (https://github.com/node-inspector/node-inspector); see Node-inspector with Express 4.
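For the "log where I am every second" idea from the question, a minimal event-loop heartbeat also works: a setInterval callback cannot run while synchronous code is looping, so when the heartbeat reports a large gap (or stops entirely), you know the event loop is blocked. A sketch with made-up names, assuming plain Node:

```javascript
// Returns a message if the observed gap between heartbeats is suspiciously
// large (i.e. the event loop was blocked), otherwise null.
function checkGap(lastTick, now, intervalMs) {
  const gap = now - lastTick;
  return gap > intervalMs * 1.5 ? `event loop was blocked for ~${gap} ms` : null;
}

let last = Date.now();
setInterval(() => {
  const now = Date.now();
  const msg = checkGap(last, now, 1000);
  if (msg) console.log(msg);
  last = now;
}, 1000).unref(); // unref'd so the heartbeat doesn't keep the process alive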
My goal is to keep an app running on my Synology. I can run the app by connecting over SSH, and everything is fine, but when I close the connection the app obviously stops working.
I'm not very expert with Unix and co., so my actual workaround is this:
Guacamole Docker + Firefox Docker: I connect to the Firefox Docker, then use Guacamole for the SSH session and run the app from there. I can close everything and the app keeps running fine. But I know I'm wasting so many resources for such an easy task.
I know for sure that there is a better way to accomplish this, so can anyone help me? Thanks in advance!
Usually, when you start something in a shell, that something is bound to the shell, so when you terminate the shell (as when you close the SSH connection) whatever was bound to it also terminates.
For your process to survive the shell's termination, you have to detach it. There are various ways to do this, and I'm not experienced with the specific Unix distribution that Synology uses, so I can't be 100% sure, but this will probably work: https://askubuntu.com/questions/8653/how-to-keep-processes-running-after-ending-ssh-session
It uses a piece of software, tmux, to do the detaching.
That said, using Docker is not a bad idea. I don't think there will be any noticeable performance difference or wasted resources.
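For reference, the simplest detaching route is nohup, which ships with any POSIX shell. A runnable sketch where `sleep 300` stands in for your real app command:

```shell
# nohup detaches the process from the terminal's hangup signal, and '&'
# puts it in the background, so it survives when the SSH session closes.
# 'sleep 300' is a stand-in for the real app command.
nohup sleep 300 > app.log 2>&1 &
APP_PID=$!
echo "started with PID $APP_PID"

# The tmux route from the linked answer looks like this (run interactively):
#   tmux new -s myapp     # open a session
#   <start the app here>
#   Ctrl-b d              # detach; the app keeps running
#   tmux attach -t myapp  # reattach later
```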
I'm hoping to adopt k6 for load testing, but I'm having trouble developing scripts for it. My primary use case is to check each request to see if I'm receiving the correct headers and content, and I would like to inspect the response with a debugger.
I tried to run the script on its own by attaching the node inspect debugger (https://nodejs.org/api/debugger.html), but the file doesn't get executed because the import and export module keywords are unrecognized by my current version of Node (8.7.0).
I'm also unable to find any documentation on how to debug these scripts.
There is no debugger support (currently known) for k6 scripting; it's manual debugging at this time.
k6 runs JavaScript (ES6) and has an API documented at http://k6.io
Side note: k6 is not Node and will not work with Node tooling.
I recently opened an issue about this: the need for a "debug" mode where detailed information about requests is printed to stdout.
https://github.com/loadimpact/k6/issues/331
To be clear, this issue is not about creating a "real" debugger, like gdb or similar, where you can step through script code, but rather a special mode of operation where lots of HTTP request information is being output to stdout in real time, to facilitate understanding exactly what is happening between client and server when your script code is executed.
I will probably try to implement something like this as soon as Emily (the maintainer) merges in some major CLI changes she is working on right now.
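Until something like that lands, "manual debugging" in practice means logging the parts of the response you care about yourself. A sketch of that pattern with names of my own invention; in a real k6 script, `res` would come from `http.get(url)` and the printout would appear in k6's stdout:

```javascript
// Hypothetical helper: dump the interesting parts of a response and
// collect anything that doesn't match expectations.
function debugResponse(res, expectedType) {
  console.log(JSON.stringify({ status: res.status, headers: res.headers }, null, 2));
  const problems = [];
  if (res.status !== 200) problems.push("unexpected status " + res.status);
  if (res.headers["Content-Type"] !== expectedType)
    problems.push("unexpected content type " + res.headers["Content-Type"]);
  return problems;
}

// Simulated response object so the sketch runs anywhere:
const res = { status: 500, headers: { "Content-Type": "text/html" } };
console.log(debugResponse(res, "application/json")); // two problems reported
```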
I'm trying to follow these instructions for debugging android javascript.
I am aware of How can I debug javascript on Android?, but it's not clear to me how (or if) I can hit breakpoints - either using Chrome on the Android device, or the Android browser.
I can see and 'inspect' the device OK:
But breakpoints don't get hit, nor can I see line numbers on the errors in the console:
Between these two problems, I'm not getting much useful information from the debugging experience! I have tried going to 'about:debug' in the Android browser, and I do see the debug options appear.
I will add that the js I am debugging works fine in the latest Chrome on the same Android device.
First off, it seems like there are a bunch of syntax errors that may be preventing mustache.js from executing at all - see if you can take care of those first.
I'd try setting a breakpoint on the next line down - line #9 - to see if anything in that IIFE is running at all.
Assuming you are using a module bundler (such as webpack) in development (based on port 8080 in your screenshot), most likely the code you're trying to debug is executed via eval, in which case by the time you can see it in the devtools, it has already run.
You can either use the debugger statement in your code, or run in production mode, where there's a real script file being executed. In both cases, you should attach the remote debugger first, and only then navigate to your page (or refresh it).
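A sketch of the debugger-statement approach (the handler name is made up; the statement is a no-op unless a debugger is attached, so it is safe to leave in during development):

```javascript
// Hypothetical touch handler: execution pauses on the `debugger` statement
// whenever remote devtools are attached; otherwise it is ignored.
function handleTap(event) {
  debugger; // inspect `event` here in the attached devtools
  return event.type;
}

console.log(handleTap({ type: "tap" })); // → "tap"
```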
I have now tried the same thing again, and this time didn't experience the problem. Unfortunately I can't put my finger on what the problem was exactly, as due to my dev machine dying I am now running a fresh Windows 10 installation, and potentially a different version of the Android SDK and ADB. The phone and the Android browser haven't changed.
Anyway, I can now set and hit breakpoints as I'd expect:
I also get better error descriptions and line numbers:
FWIW, the only problem that needed fixing was changing some 'let' declarations to 'var'.
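For context: the stock Android browser predates ES2015, so a `let` declaration is a parse-time SyntaxError there, which kills the whole script before any of it runs. The ES5-safe form looks like:

```javascript
// let total = 0;  // ES2015 syntax: a SyntaxError on pre-ES2015 engines
var total = 0;     // ES5 equivalent, parses everywhere
for (var i = 1; i <= 3; i++) {
  total += i;
}
console.log(total); // → 6
```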
I am having a problem when running multiple instances of PhantomJS on Ubuntu 14. After a few minutes, the processes become unresponsive.
Brief background:
Using PhantomJS 2.0 to render a web page that ultimately gets saved as a PDF using wkhtmltopdf.
PhantomJS is only responsible for loading the page, making ajax requests, and waiting for a response after the PDF is saved on the server. It does not generate the PDF itself.
There are hundreds of web pages that need to be generated into PDF, so I want to run as many PhantomJS instances in parallel as the system allows.
Each PhantomJS process is started by a shell script like so:
{path to phantomjs} {path to js file} --data {some argument} >> {path to log file} 2>&1 &
The problem occurs after a couple of minutes: I stop getting any output from the PhantomJS processes, and looking at top I can see they are just lying there, not doing anything. The JS script has timers that retry loading a page if it takes longer than a minute, and eventually call phantom.exit() if the page can't load or PDF generation fails. So even if something goes wrong, the process should still exit, but it doesn't.
I tried changing the number of PhantomJS instances running in parallel, from 20 down to 10, 5, and 3, but it doesn't seem to matter. I actually get many more jobs executing successfully when maintaining 20 instances at a time.
When running with --debug=true I can see that at some point it gets stuck at
[DEBUG] WebPage - updateLoadingProgress:
Also going through the output I see several of these warnings:
[WARNING] QIODevice::write: device not open
which makes me believe that this is the source of the problem.
I thought there might be some contention for file resources so I tried without redirecting output to a log file, and not using --local-storage-path, but that didn't help.
As a side note, I have been using PhantomJS for several years now doing the same procedure, only sequentially (run a single PhantomJS process at a time). And although there were a few snags to overcome, it worked great.
Any idea what's causing this?
Anyone faced with a similar problem?
Any advice on running multiple PhantomJS instances in parallel?
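One way to cap the number of simultaneous instances, instead of backgrounding each one by hand as in the shell script above, is xargs -P. A runnable sketch where `phantomjs render.js` is a stand-in for your real paths, and the command is only echoed so the sketch runs anywhere:

```shell
# Run at most 5 jobs at a time over a list of --data arguments.
# Replace `echo` with the real phantomjs invocation in practice.
printf '%s\n' a b c d e f |
  xargs -P5 -I{} echo phantomjs render.js --data {} > run.log
wc -l < run.log   # six command lines, run at most five at a time
```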
Thanks!
I faced the exact same issue both locally and on my CI server (which was also Ubuntu). Uninstalling 2.0.0 and upgrading to 2.1.1 resolved the problem for me.
I was facing the same issue. Use driver.quit() instead of driver.close(). That solved the issue for me.
I am having a pretty specific problem, but I hope people can point me in the right direction for how to debug or even fix it. I am trying to write a local client that can run and test a web page I built which uses Socket.IO.
I am running PhantomJS with the command-line option --web-security=false, since otherwise no incoming or outgoing connections are allowed, as my local tester is not considered part of the same origin as the website I am testing (I had to fix that before listening would work).
Using PhantomJS, I can't get the emit function from Socket.IO to work. It just fails silently, without any error. I know the socket is validly connected because it can listen to incoming events just fine (so the on() method works). I can run the same emitting code in a web browser and it works just fine.
Does anyone know alternatives to emit(), what lower-level things emit() invokes that I could maybe patch, or how I should test things next? Any help is appreciated.
After a lot of research, it looks like this form of sockets simply isn't yet supported in PhantomJS. When the new 2.0 releases they are supposed to be, but until then other options are better. I tried to find a shim for a while but was unsuccessful.
In the end, I instead used Node.js to run the main script and make the socket connections, and then used the phantomjs node module to do the browser interactions, rather than running everything as a pure Phantom script. This meant the API-interaction logic got pushed to the Node application and the Phantom code was just about interacting with the page, but I was able to achieve the testing goal this way, so I count it as a success.