Considering that PhantomJS isn't exactly node.js (so modules like deathbycaptcha2 are out, as they use native requests), is it possible to simply open another instance of webpage and use it to send POST requests to the captcha API without affecting the other page instance?
Will this new page.open() retain cookies collected by the first page?
Will this new page.open() retain cookies collected by the first page?
Yes, there exists only one CookieJar for each PhantomJS process. So every page that you create shares the same cookies. Think of those page instances as windows or tabs in a conventional browser.
[I]s it possible to simply open another instance of webpage and use it to send POST requests to the captcha API without affecting the other page instance?
That's not so easy, since the cookies are shared. If you don't access the same pages you can safely create a second instance. If you want to access the same page in the second instance then you can spin up a second PhantomJS process through the child_process module (e.g. with execFile).
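PhantomJS also ships an (experimental) child_process module modelled on node's, so whether you call it from node.js or from PhantomJS itself, the call looks roughly like the sketch below, where 'captcha-post.js' is a placeholder for the script that talks to the captcha API:

```javascript
var childProcess = require('child_process');

// Spin up a second PhantomJS process, which gets its own CookieJar,
// so its requests cannot interfere with the cookies of the first instance.
childProcess.execFile('phantomjs', ['captcha-post.js'], null, function (err, stdout, stderr) {
  if (err) {
    console.log('second instance failed: ' + stderr);
  } else {
    console.log('second instance output: ' + stdout); // e.g. the captcha API response
  }
});
```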
Considering that PhantomJS isn't exactly node.js [...]
True, but there are several bridges between PhantomJS and node.js such as phantom, node-phantom, nightmare, etc. You can use them to interface with PhantomJS and additionally call node modules that you want.
I have a C++ project for Windows, using MiniBlink as an embedded browser. (MiniBlink is a smaller Blink, which is close to Chromium.) I use this embedded browser to show responsive and nice-looking dialogs with Quasar.js (a wrapper for Vue.js).
Problem:
Mostly a browser is just a passive backend. In my case, both the backend (the project with the embedded browser) and the frontend (the dialog) are active, so I need some communication between them. At the moment I use a local server to catch HTTP requests sent from the frontend to the backend.
But is there a way to communicate from the backend to the frontend? At the moment I can only think of catching cookies or using a permanent loop in JS that sends HTTP queries to check for a possible response.
And is there no other way to send information to a backend? Everything is local; I don't need, nor really want, to send it over the network.
Thanks!
Idea 1: Use a local temp file that one side writes and the other reads (it can also be used in both directions)
Idea 2 (similar to the question author's solution): A local server with two-way communication (GET/POST requests in one direction, text/JSON the other way around)
Idea 3: Use a launch parameter to pass data directly in the URL, for example: instead of browserprocess.exe file.html, use browserprocess.exe file.html#showsomething
There are also other signalling tricks, for example checking the window title of a process with a certain binary name in the list of running tasks from the other side. We didn't get enough information about your setup, because you could use this either in the same process or in a separate process; if it's the same process, you could also just share variables directly in the MiniBlink code in both directions and trigger an action when they satisfy an if statement.
As CertainPerformance added as a comment, WebSockets might be the best way to go.
If one does not want to implement a WebSocket server because an HTTP server is already running, long-polling requests might be the best workaround to simulate this behaviour.
Long polling: The client sends a request, which stays open as long as possible. If the server needs to communicate, it can use the open request to send its own "request" via response. It is a bit hacky, but essentially the idea behind websockets.
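A rough sketch of that flow, shown with a small node/Express server purely for illustration (the same idea applies to whatever local HTTP server the backend already runs; the /poll route name is made up):

```javascript
// Server side: hold the response open until the backend has something to say.
const express = require('express');
const app = express();

let waiting = []; // open long-poll responses

app.get('/poll', (req, res) => {
  waiting.push(res);                               // don't answer yet
  req.on('close', () => {                          // client gave up or navigated away
    waiting = waiting.filter((r) => r !== res);
  });
});

// Call this whenever the backend wants to "push" something to the frontend.
function notifyClients(message) {
  waiting.forEach((res) => res.json({ message }));
  waiting = [];
}

app.listen(3000);
```

```javascript
// Client side: re-issue the request as soon as the previous one is answered.
async function poll() {
  try {
    const res = await fetch('/poll');
    const data = await res.json();
    console.log('backend says:', data.message);
  } catch (e) {
    await new Promise((r) => setTimeout(r, 1000)); // brief back-off on errors
  }
  poll();
}
poll();
```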
Mozilla has a nice article to help with websockets:
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers
If you (like me) use Vue.js or Quasar, you might want to have a look at vue-native-websocket.
https://github.com/nathantsoi/vue-native-websocket
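Whichever server implementation you end up with, the browser side is just the standard WebSocket API; a minimal sketch (the address and message shape are placeholders):

```javascript
// Frontend: connect to the local backend and react to pushed messages.
const socket = new WebSocket('ws://localhost:8080'); // placeholder address

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'hello' }));    // frontend -> backend
});

socket.addEventListener('message', (event) => {
  const msg = JSON.parse(event.data);                // backend -> frontend
  console.log('Backend pushed:', msg);
});
```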
Good luck
Wondering if it's possible to run a client-hosted HTML file (with scripts etc.) that can access the computer's serial ports.
My requirement is a portable 'applet' used to configure a serially connected device using hex data sentences - without a web server somewhere else.
i.e. send a serial request, a data block is received, the user manipulates it locally, then pushes it back to the serial port with a different prefix.
Any thoughts? thanks
This is not possible: no browser-based JavaScript engine (which is what you'd need to execute scripts in an HTML file) has bindings for serial ports. Using hybrid frameworks such as Cordova allows you to make JavaScript bindings to native code; you would need a native implementation for each individual platform for Cordova to bind to. There may be existing Cordova plugins that do this (depending on exactly what you need); for example, there are multiple Cordova-to-Arduino bridges.
I'm trying to write a simple app that watches for file changes and automatically reloads the updated code in the browser. I'm aware of the existence of livereload, nodemon and others; I just wanted to write my own. I've created the server, let it read the file I want to watch, and written the watcher that kills and restarts the server when the watched file changes. Last part: it should refresh the browser. How is this possible?
As others have explained, the browser programming environment and thus window.location.reload() is completely separate from node.js so you cannot call that directly from node.js. Server-side code runs on the server in node.js. Client-side code in the browser runs in the browser only. The two may communicate via Ajax requests or messages sent over a webSocket connection, but they can't call code in each other directly.
For a browser to refresh based on something that changes on the server, there are two basic approaches.
Javascript in the browser can regularly ask the server if it has anything new (usually with an Ajax call that passes a timestamp). If the server says there is something new since that timestamp, then the Javascript in the browser can request the new data/file. This "polling" approach is not particularly efficient because if the data doesn't change very often, then most of the requests for something new will just return that there is nothing new. It's also not friendly for battery life.
The browser can make a lasting webSocket connection to the server. Then, when the server finds that something has changed on the server, it can just directly send a notification to each connected browser informing it that there is something new. All connected clients that receive this message can then make a normal Ajax call to retrieve the new data. Or, the new info can just be directly sent to the client over the webSocket. In either case, this direct notification is generally more efficient than the "polling" solution.
With the webSocket option, you will NOT want your server to restart because that will drop all webSocket connections. I'm not sure why you're currently restarting the server when something changes, but you will have to change that to use the webSocket solution.
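To make the webSocket option concrete, here is a minimal sketch using the third-party ws package on the server; the watched path and the port are placeholders:

```javascript
// Server (node.js): watch a file and tell every connected browser to reload.
const fs = require('fs');
const { WebSocketServer, WebSocket } = require('ws'); // third-party 'ws' package

const wss = new WebSocketServer({ port: 8081 });

fs.watch('./public/app.js', () => {                   // placeholder path to watch
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send('reload');                          // notify every connected browser
    }
  }
});
```

```javascript
// Client (in the page): reload when the server announces a change.
const socket = new WebSocket('ws://localhost:8081');
socket.addEventListener('message', (event) => {
  if (event.data === 'reload') {
    window.location.reload();
  }
});
```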
NodeJS does not have a window variable that represents the global namespace. Instead it has a variable called global.
There is no browser, window or URL location in NodeJS.
It's a purely server-side tool.
In node.js you have process, which is a node.js global object, just as window is a browser global object.
For more info type process into the node.js shell.
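For example, in a node.js script or REPL:

```javascript
console.log(typeof window);    // 'undefined' -- there is no window in node.js
console.log(typeof global);    // 'object'    -- node's global namespace object

console.log(process.version);  // e.g. 'v18.17.0'
console.log(process.platform); // e.g. 'linux' or 'win32'
console.log(process.cwd());    // current working directory
```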
Say I have two tabs, each with a web-page loaded on a different domain. The pages in the two tabs want to communicate.
The simplest solution I could see was this one (my answer on a closely related question I found while searching for duplicates), where one or both of the pages load an intermediate page in an iframe, which proxies between postMessage() and localStorage events. However, this does require that page to be hosted somewhere, and an extra request by the client.
Are there any techniques for this that wouldn't require a specialised "proxy page" to be served by one of the domains? (I.e. that could be implemented by a JavaScript library without a supporting server?)
This JavaScript library appears to provide the functionality you're looking for (i.e., it supports cross-origin communication between browser tabs). I have not used it yet, but will be trying it out in my application. Check out https://github.com/wingify/across-tabs.
I'd probably choose to create a backend API service as a common communication tunnel between the two different websites.
E.g.
Site-A sends a POST message to https://your-API-service
When Site-B asks https://your-API-service for an update
the API service returns the message previously sent by Site-A
If you need real-time communication you can also use WebSockets or push notifications.
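A rough sketch of such a relay, assuming Express (the /message endpoint and the single-slot storage are made up purely for illustration):

```javascript
const express = require('express');
const app = express();
app.use(express.json());

let lastMessage = null;            // naive single-slot "mailbox", for illustration only

// Site-A posts a message to the shared API.
app.post('/message', (req, res) => {
  lastMessage = req.body;
  res.sendStatus(204);
});

// Site-B polls the API for the latest message.
app.get('/message', (req, res) => {
  res.json(lastMessage);
});

app.listen(3000);
```

In practice you would also need to send CORS headers (for example via the cors middleware), since the two pages live on different origins.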
The window.postMessage API is what you're looking for.
https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage
The window.postMessage() method safely enables cross-origin communication between Window objects; e.g., between a page and a pop-up that it spawned, or between a page and an iframe embedded within it.
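For instance, for the page/pop-up case the quote mentions (the origins below are placeholders):

```javascript
// On https://site-a.example -- open the other page, then message it once it says it's ready.
const popup = window.open('https://site-b.example/receiver.html');

window.addEventListener('message', (event) => {
  if (event.origin !== 'https://site-b.example') return;   // always check the origin
  if (event.data === 'ready') {
    popup.postMessage({ hello: 'from site A' }, 'https://site-b.example');
  }
});
```

```javascript
// On https://site-b.example/receiver.html -- announce readiness, then listen for data.
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://site-a.example') return;
  console.log('received:', event.data);
});

if (window.opener) {
  window.opener.postMessage('ready', 'https://site-a.example');
}
```

Note that postMessage needs a direct reference to the other window (an iframe, a pop-up, or window.opener), which is exactly why the iframe-proxy trick from the question comes up for two otherwise unrelated tabs.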
I am looking at express.js for the back end and JS on the client side.
My app is a single page Web App.
The server will only serve JSON messages and my question is about "routing" for express.
Is one supposed to use routing to connect the UI and the server side business logic?
How will that work with my single page app?
So let's say the client makes an Ajax call to the server looking for a value in the database, and there is a server-side script that returns the JSON to the UI. How is this relationship between the UI and the node script set up?
Can someone shed some light on this?
Single page apps are those that live on a single HTML document. This means that if you want to display some different content to the user, depending on the state of the application, you will need to do some DOM manipulation (cutting out and replacing certain elements of the current document with different HTML) in order to update the 'view' that the user sees. Excuse me if this is obvious to you, please don't take offense. I figured I'd start from here. Hang with me and I'll explain how your routing situation is going to play out (more or less).
URLs are composed of a few different parts, each of which informs the browser of a particular bit of information that is required in order to download the resource that the user is attempting to access. Typically the resources that you are looking for are off on a server somewhere and the browser knows this because of pieces in the URL like 'protocol' ('http:') and 'host' ('www.mydomain.com'), so it goes off to that server to find what you're requesting. There are also 'query' parameters in URLs which provide some additional information to the server regarding a particular action, like the search terms of a search query. After the query parameters, comes the 'hash'. The hash is where the magic of single page apps happens... eh, well, kind of.....
First, a bit about the hash. When you add a '#' to a URL, the browser interprets the information that comes after it as a location (element) within the currently displayed document. That means, if you have an element with an 'id' of 'main' and you add '#main' to the end of the URL, like so: 'http://www.example.com#main', the browser will 'scroll' (typically 'jump') to the beginning of that element so that you can see it. Be aware, though, that only the part after the '#' gets this treatment: change anything before it (protocol, host or path) and you force a complete page reload, because the browser goes back to the server for a different resource.
The takeaway here is that the browser will not attempt to navigate away from the current document if only the hash in the URL changes, the exception being of course the case mentioned above, and this is great because single-page apps don't want to navigate away from the page or request a new document from the server. (See how routing is different for single-page apps?)
Now, this whole thing about the hash isn't vital to single-page apps, as you could make one without dealing with it all. A bunch of click handlers and DOM manipulation is all you'd need really... But, that would mean that users will have no way of sharing links to particular views in your app. The URL would never change, and we would never be able to navigate to any particular view directly. We'd always be starting from the starting position of your app, which could easily be a very annoying situation.
If your single-page app is going to have different views, and you want users to be able to navigate directly to particular ones via bookmarks or links, then you will need to implement a form of routing on the front-end in addition to the routing that you'll need to implement on the backend (routing for data API, etc.), which means that you will need to make use of the hash.
I don't want to get into how different frameworks accomplish routing on the front-end, but it's basically a matter of updating the browser's address field when the user clicks a link, and watching the address bar to determine what the current URL is and loading the HTML that is associated with that URL into the DOM in the designated location in the document tree.
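To make that concrete, a bare-bones hash router outside of any framework might look like the following sketch (the '#app' container and the route names are made up):

```javascript
// Map hash 'routes' to functions that render the corresponding view.
const routes = {
  '':        () => '<h1>Home</h1>',
  'about':   () => '<h1>About</h1>',
  'contact': () => '<h1>Contact</h1>',
};

function render() {
  const view = location.hash.replace(/^#\/?/, '');        // e.g. '#/about' -> 'about'
  const template = routes[view] || (() => '<h1>Not found</h1>');
  document.getElementById('app').innerHTML = template();  // swap out the current view
}

window.addEventListener('hashchange', render); // fires when only the hash changes
window.addEventListener('load', render);       // render the initial view
```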
So, within a single-page app, you have one route on the server that deals with rendering the app HTML document (index.html), and you have routes that are responsible for dealing with the data of your app (creating new instances in the database, logging in and out, editing or destroying instances in the DB, and fetching data...) which are called via AJAX requests.
This is actually a fairly complicated area, because HTML5 lets us forgo the hash (with the help of some link rewriting on the server) and still use the 'back' and 'forward' buttons as if we had actually navigated away from the original document (which we haven't, because we have only pointed the browser at the same URL with modified hash values, so no new page loads have occurred). Traditional site navigation and linking can be achieved by utilizing the browser's History API, which is available in IE beginning with version 10 (I believe); the rest of the big browser vendors were on to it quite a bit earlier, so frameworks that leverage this technology will allow your users to navigate your app without the hash in the URL. Explaining this is a diversion and not necessary for understanding routing in single-page apps, but it is interesting and you'll have to learn it eventually anyway, probably.
AJAX should be used to request JSON from the server. AJAX requests will always hit your server because you don't include the hash symbol in AJAX requests (it would be ridiculous to do so because the hash is meant only for in-document browsing), so server-side routes must be responsible for exposing your data API (consider a RESTful one). While this is not their sole purpose in single-page apps, it is perhaps their most important one.
Soooo, to wrap it up, you will have two sets of routes. One on the client (as part of a client-side framework like AngularJS or EmberJS, the list goes on... I prefer Angular, but there is a fairly steep learning curve for that one.), and one on the server. When you think about 'server routes' think data API. When you think of 'page routing', remember that this all gets handled on the client, by the javascript that you delivered with the initial server response (this is the one and only necessary server-side route involved with rendering HTML to the browser, loading your 'index.html' and all of the necessary scripts and stylesheets, etc). You will use express.static middleware to serve static files, so you don't have to worry about assigning routes for that stuff.
EDIT A quick mention of AJAX implementation.
On the server, you will have routes similar to those that Alex has provided as examples, and you will make calls to those URLs from the client using whatever XMLHttpRequest (XHR) wrapper is exposed by your framework or library of choice. It is now considered more or less standard practice for frameworks/libraries to implement these requests as Promises (http://wiki.commonjs.org/wiki/Promises/A). You should read a bit about it on your own, but I might summarize it by saying that a promise is an asynchronous analogue of 'try, catch, throw' in synchronous code. You instantiate a promise object and through it attempt to load data from the server, for instance via a GET request. Make sure that you have assigned functions on the server to handle requests made to that URL (the server-side route)! The object promises to return the result of the request to you once it comes back from the server, whether it was successful or not. If it succeeds, it calls a function that you have written and supplies it with the data from the server. If it fails, it calls a different function, also written by you, and supplies it with the error object (or 'reason' for failure), so you can handle the error appropriately.
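For instance, using the browser's built-in fetch as one Promise-based option (the URL is just an example endpoint):

```javascript
// GET some JSON from a server-side route and handle success/failure.
fetch('/api/books')                       // example endpoint; use your own route
  .then((response) => {
    if (!response.ok) throw new Error('HTTP ' + response.status);
    return response.json();               // parse the JSON body
  })
  .then((books) => {
    console.log('Loaded', books.length, 'books'); // success handler
  })
  .catch((error) => {
    console.error('Request failed:', error);      // failure handler
  });
```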
Hope that helped answer your question.
You only have to route requests you serve dynamically. Your HTML, CSS, and JS are all static assets. So all you need to handle routing for is your data.
It sounds like you want a Restful API, which basically means that you have URLs for specific resources, and HTTP verbs for manipulating them.
Something like:
GET /books.json - Get all books
POST /books.json - Create a new book with properties passed in the body of the request
GET /books/123.json - Get book with id of 123
PUT /books/123.json - Update an existing book with properties passed in the body of the request
This blog post seems to show how to set this up in Express.
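For illustration, a minimal sketch of those routes in Express (using an in-memory array instead of a real database, and dropping the .json suffix for brevity):

```javascript
const express = require('express');
const app = express();
app.use(express.json()); // parse JSON request bodies

// In-memory stand-in for a real database, for illustration only.
let books = [{ id: 123, title: 'Example Book' }];

app.get('/books', (req, res) => res.json(books));     // get all books

app.post('/books', (req, res) => {                    // create a new book
  const book = { id: Date.now(), ...req.body };
  books.push(book);
  res.status(201).json(book);
});

app.get('/books/:id', (req, res) => {                 // get one book by id
  const book = books.find((b) => b.id === Number(req.params.id));
  if (!book) return res.sendStatus(404);
  res.json(book);
});

app.put('/books/:id', (req, res) => {                 // update an existing book
  const book = books.find((b) => b.id === Number(req.params.id));
  if (!book) return res.sendStatus(404);
  Object.assign(book, req.body);
  res.json(book);
});

app.listen(3000);
```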
Once you have a sane API delivering JSON, you just make your AJAX calls use it based on what objects you want to fetch.