I am having a pretty specific problem, but I hope people can point me in the right direction for how to debug or even fix it. I am trying to write a local client that can run and test a webpage I built that uses SocketIO.
I am running PhantomJS with the command-line option --web-security=false, since otherwise no inbound or outbound connections are allowed: my local tester is not considered part of the same origin as the website I am testing (I had to fix that before listening would work).
Using PhantomJS, I can't get SocketIO's emit function to work. It just fails silently, without any error. I know the socket is genuinely connected because it can listen to incoming events just fine (so the on() method works), and I can run the same emitting code in a web browser and it works.
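For reference, the pattern that works in a normal browser but fails under PhantomJS looks roughly like this (the server URL and event names are placeholders):

// assumes the socket.io client library is already loaded on the page
var socket = io.connect("http://example.com"); // placeholder server URL

socket.on("connect", function () {
    socket.emit("client-event", { hello: "world" }); // silently goes nowhere under PhantomJS
});

socket.on("server-event", function (data) {
    console.log("received:", data); // incoming events arrive fine, so the connection is good
});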
Does anyone know of alternatives to emit(), lower-level things that emit() invokes that I could maybe patch, or what I should test next? Any help is appreciated.
After a lot of research, it looks like WebSockets simply aren't supported in PhantomJS yet. They are supposed to be when 2.0 is released, but until then other options are better. I tried for a while to find a shim, but was unsuccessful.
In the end, I instead used Node.js to run the main script and make the socket connections, and used the phantomjs node module for the browser interactions, rather than running everything as a pure Phantom script. This meant the API-interaction logic moved into the Node application and the Phantom code was only about interacting with the page, but I was able to achieve the testing goal this way, so I count it as a success.
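In sketch form, the split looked something like this (the details here are illustrative rather than my exact code; URLs and event names are placeholders):

// Node side: owns the socket connection; emit() works normally here.
var io = require("socket.io-client");
var phantom = require("phantom");

var socket = io.connect("http://example.com"); // placeholder server URL

socket.on("connect", function () {
    socket.emit("start-test", { suite: "login" }); // fine from Node

    // PhantomJS side: only page interactions, driven through the phantom module
    phantom.create(function (ph) {
        ph.createPage(function (page) {
            page.open("http://example.com/page-under-test", function (status) {
                page.evaluate(function () {
                    return document.title; // runs inside the page
                }, function (title) {
                    socket.emit("result", { title: title }); // report back over the socket
                    ph.exit();
                });
            });
        });
    });
});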
Related
My goal is to keep an app running on my Synology NAS. I can run the app after connecting over SSH and everything is fine, but when I close the connection the app obviously stops working.
I'm not much of an expert with Unix and co., so my current workaround is this:
Guacamole Docker container + Firefox Docker container: I connect to the Firefox container, then use Guacamole for the SSH session and run the app from there. I can close everything and the app keeps running fine. But I know I'm wasting an enormous amount of resources on an easy task.
I know for sure that there is a better way to accomplish this, so can anyone help me? Thanks in advance!
Usually, when you start something in a shell, that something is bound to that shell, so when you terminate the shell (as when you close the SSH connection), whatever was bound to it also terminates.
For your process to survive the shell's termination, you have to detach it. There are various ways to do this; I'm not experienced with the specific Unix distribution that Synology uses, so I can't be 100% sure, but this will probably work: https://askubuntu.com/questions/8653/how-to-keep-processes-running-after-ending-ssh-session
It uses a piece of software, tmux, to do the detaching.
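The basic tmux flow is something like this (the app command is just a placeholder):

tmux new -s myapp        # start a named session; you get a shell inside it
./run-my-app             # launch your app inside the session (placeholder)
# press Ctrl-b, then d, to detach; the session and the app keep running
# later, from a brand new SSH connection:
tmux attach -t myapp     # reattach to the same session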
That said, using Docker is not a bad idea. I don't think there would be any noticeable performance difference or wasted resources.
First, thanks to anyone who can help with this; it's greatly appreciated.
As you can see from the title, this is a very weird occurrence. I do a lot of work with testcafe and can't really explain this.
The scenario: at my company we spin up instances in AWS, put our product on the instance, then run the automation. These instances are automatically torn down after around 3 hours, so I can't really post an example instance, as it will be gone. When I try to go to the instance, I get stuck with a spinner at the login page. I tried Firefox, Chrome, Chromium, Safari, incognito mode, Tor, etc. They all get stuck at this spinner; in fact, this happens to everyone in the company.
For some reason, when I run tests via testcafe on my computer using Chrome against this instance, it gets past the spinner, logs in, then just runs the tests like nothing is wrong. I have tried using localhost as the host, different ports, skipping JS errors, and other flags (an example invocation is below). I am updated to the latest version of testcafe. My theory is that it has something to do with the proxy server that testcafe launches (just a guess). I tried online proxy servers and even made a local proxy server, but still, none can get past this spinner.
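For reference, the kind of command I was running looked roughly like this (the test path is illustrative; --hostname, --ports, and --skip-js-errors are the flags I was varying):

testcafe chrome tests/login-test.js --hostname localhost --ports 1337,1338 --skip-js-errors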
I'm pretty sure more info is needed to help with this; I'm just not sure what to add. If there are any tips, or logs I should attach, please let me know.
UPDATE:
I tried a few more online proxy sites and found one that worked (it behaved the same way as testcafe). I believe at this point I can prove that it's related to the proxy server. With that established, I'm assuming there is no way to get around this issue, right (meaning no way to make testcafe fail the way a normal browser does)?
I have opened an issue with testcafe and they have confirmed it's a bug: https://github.com/DevExpress/testcafe/issues/6055
I'm hoping to adopt k6 for load testing, but I'm having trouble developing scripts for it. My primary use case is to check, on each request, whether I'm receiving the correct headers and content, and I would like to inspect the response with a debugger.
I tried to run the script on its own by attaching the node inspect debugger (https://nodejs.org/api/debugger.html), but the file doesn't execute because the import and export module keywords are not recognized by my current version of Node (8.7.0).
I'm also unable to find any documentation on how to debug these scripts.
There is currently no known debugger support for k6 scripting; it's manual debugging at this time.
k6 runs JavaScript (ES6) and has an API documented at http://k6.io
Sidenote: k6 is not Node and will not work with Node tooling.
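In practice, "manual debugging" means logging from the script and asserting with check(). A minimal sketch of what that can look like (the URL and expected values are only examples):

import http from "k6/http";
import { check } from "k6";

export default function () {
    var res = http.get("https://test.k6.io/"); // example target
    // dump whatever you want to inspect straight to stdout
    console.log("status: " + res.status);
    console.log("content-type: " + res.headers["Content-Type"]);
    // and/or assert on it, so failures show up in the end-of-run summary
    check(res, {
        "status is 200": function (r) { return r.status === 200; },
        "body has a title tag": function (r) { return r.body.indexOf("<title>") !== -1; },
    });
}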
I recently opened an issue about this - the need for a "debug" mode where detailed information about requests is printed to stdout.
https://github.com/loadimpact/k6/issues/331
To be clear, this issue is not about creating a "real" debugger like gdb, where you can step through script code, but rather about a special mode of operation in which lots of HTTP request information is output to stdout in real time, to make it easier to see exactly what is happening between client and server while your script code executes.
I will probably try to implement something like this as soon as Emily (the maintainer) merges in some major CLI changes she is working on right now.
My problem is:
I have been developing a Python script that connects to a URL and, using the Selenium driver, injects a JavaScript file; after this file executes, the current page is redirected. This all works when using Selenium to drive Firefox:
driver = webdriver.Firefox()
but when I try to use PhantomJS as the browser, since it doesn't have a graphical interface:
driver = webdriver.PhantomJS()
I can't handle the response properly. I still haven't found out whether the driver is failing to inject the script correctly or whether it's a response-handling problem. If anyone has any ideas, I'd be glad to hear them.
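For context, the flow I'm describing is roughly this (the URL and file name are placeholders; this is the Firefox version that works):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/login")   # placeholder URL
with open("inject.js") as f:
    driver.execute_script(f.read())      # run the file's contents in the page
print(driver.current_url)                # check whether the redirect happened
driver.quit()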
I posted this on another question, but I think this will help:
After dealing with this same dilemma myself, I can wholeheartedly recommend using your preferred Selenium WebDriver browser (mine is Chrome) in conjunction with Xvfb.
Xvfb allows you to headlessly run a browser like Firefox, Chrome, etc., which basically eliminates all of the bugginess that inherently comes with using PhantomJS. While PhantomJS is definitely an awesome piece of software, its inner workings tend to behave differently at times (for instance, I ran into issues with not being able to TAB from one element to another the way one can in any real browser). If you are using Jenkins, there is an incredibly useful plugin that literally takes one click of a button to set up. Otherwise, I'd definitely recommend checking Xvfb out.
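On Linux, the simplest way to try this is usually the xvfb-run wrapper (the script name here is a placeholder):

xvfb-run -a python my_selenium_tests.py   # -a picks a free display number automatically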
Phantom is a real pain in the ass, so it's definitely worth circumventing :)
Hope this helps!
I'm not talking about browser exploits. I'm talking about real applications used in real companies, like Ijji and Nexon.
Basically, from their websites you can click a "Start Game" button, which will launch an executable located at c:\ijji\english or c:\nexon\[gamename] respectively. These are real desktop applications, meaning they can take advantage of the filesystem, Direct3D, and the OS [in the form of executing other applications]. The applications can also be launched from the command line [as opposed to going to the game host's website].
I figured this would be possible if the application created an ActiveX object to call for the creation of a new process. However, the websites are able to launch the applications from multiple browsers other than Internet Explorer, including Chrome, which, to my knowledge, does not implement ActiveX.
Obviously the people developing these applications use their own means to do this.
From looking at the services list as well as the list of currently running applications, I have no indication that they're running something like "gameLaunchingServer.exe" that listens on some obscure port for an incoming connection [to be accessed via iframe over HTTP] and responds by launching an application...
I'm stumped, and this is sort of stuck in my mind. Obviously, they're not using some random browser exploit; otherwise, people at http://www.[insertMaliciousWebsiteHere].com would have already jumped on the opportunity to install random crap. Regardless, it seems pretty cool, and I wanted to know how it worked.
Just curious, hehe.
I believe what they're doing is setting up their own protocol handler on install - when a browser is asked to access an address with a protocol that it doesn't know how to handle (for instance, a steam:// address), it looks at all the installed protocol handlers to find a match.
So you can register your application as a myApplication:// protocol handler, and then your web page can link to a myApplication:// address and launch your application.
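On Windows, registering such a handler is just a couple of registry keys. A sketch of what it might look like (myApplication and the install path are placeholders, not the actual keys these games use):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\myApplication]
@="URL:myApplication Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myApplication\shell\open\command]
@="\"C:\\Program Files\\MyApp\\myapp.exe\" \"%1\""

The web page then just links to the protocol, e.g. <a href="myApplication://start-game">Start Game</a>, and the browser hands the whole address to the registered executable as its argument.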
I didn't quite find the button you are talking about, but I'm guessing it only works after you've installed the application once, doesn't it?
In that case, the application probably registered its own protocol, just as Skype, MSN, and a bunch of other clients do.
Having a protocol is the easiest way (and very easy indeed to implement: a simple registry key).
Another approach that is sometimes used is a browser extension or plugin.
I thought they were run through plug-ins or something like applets.
For example, MS Silverlight.