URLs for REST methods - javascript

I'm trying to get my head around the correct way to represent the following methods in a RESTful manner...
Say I have some number of servers, and they run Windows, Linux, and Unix.
Each server can have a ping, shutdown, or user action run against it.
The API has no knowledge of the servers, so a request would have to provide the IP address along with the server type and action type (which it does know about).
With that in mind, these simple URLs come to mind, but they aren't RESTful in the slightest:
/192.168.1.3/Linux/ping
/192.168.1.5/windows/shutdown
Should I go down the RESTful route? Or is the above OK for a simple web API?
If RESTful, would it look like this?
GET /servertypes/{servertypeId}/actions/{actionId}?serverip=192.168.1.4

This seems to make more sense to me:
GET /servertypes/{servertypeId}/{serverip}/{action}

In my opinion, you should rethink your design.
You're effectively overriding the semantics of HTTP methods by placing the command names in the URLs, which are meant to represent scoping information. GET requests should not lead to changes of state. Ignoring the underlying protocol is not considered RESTful.
What you have here looks like an RPC model that doesn't really fit the principles of REST. It seems to me that you're trying to hammer a square peg into a round hole.
You should also ask yourself if it's really necessary to expose the underlying OS. Do you want to call system-specific commands? Should it be possible to run any executable or just a couple of common ones? Should you care if it's grep on a UNIX machine or findstr on a Windows box? Does the client care if they use a Windows or a Linux machine?
You could go for a simple pattern similar to what Uzar Sajid suggested. It would probably work all right. Just don't call it RESTful, because it's not.
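If you do want to move toward REST, one option is to model the commands themselves as resources that are created with POST, keeping GET free of side effects. A hypothetical sketch (the resource names here are mine, not from the question):

POST /servers/192.168.1.3/pings
POST /servers/192.168.1.5/shutdown-requests

The server type then becomes an attribute of the server resource rather than a URL segment, and each POST creates a record of the requested action that a client could later GET to check its status.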

GraphQL and dataLoader on separate applications/servers

I plan to implement a GraphQL API in .NET on IIS and a dataLoader API as a Node.js app server. GraphQL will interface with dataLoader, which talks to SQL Server.
All applications will be on a single physical server for now, but may possibly be separated in the future if scalability requires.
My reasons for this:
Existing code depends on IIS/COM/DCOM/ActiveX/.NET/ASP/ASPX
Simpler to implement and reason about
Access control (web server doesn't need to see dataLoader code and ACLs can be implemented in dataLoader)
Makes it easier if I get the chance to interface with a different db (Redis, MongoDB, etc.)
I can gradually slice and port parts of the code to allow easier code sharing (with separate Linux servers)
I like Node.js and want to keep it open for exploration, but I can't opt in to it fully yet
First off, does this make sense or am I asking for trouble?
Would it make sense to use a binary serialization format between GraphQL and dataLoader? Or would a plain web service be simpler?
Am I risking performance problems from more round-tripping? (Question too open-ended? Intuitively it seems like this would scale better eventually)
Is there a need for explicit authentication between GraphQL and dataLoader? Or can I just send session data (with username) through as-is and just let dataLoader trust the username given as context? Maybe pass a token? Are JWT tokens useful here?
GraphQL-dotnet has matured a bit since then and is looking pretty good.
I've since looked into solutions like AWS's API Gateway GraphQL support, and some Azure Functions solutions that support GraphQL.
Some of the techniques and design choices involved in these things were helpful here and there. But due to practical reasons, this never really came to fruition and most of these concerns never became relevant.
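On the JWT question from above: a minimal sketch of what the token handoff between the two services could look like, using the jsonwebtoken npm package (the package choice, the shared secret, and all names here are assumptions for illustration):

var jwt = require('jsonwebtoken'); // npm install jsonwebtoken
var SECRET = 'shared-secret';      // placeholder; known to both services

// GraphQL side: stamp the username into a short-lived token per request
var token = jwt.sign({ user: 'alice' }, SECRET, { expiresIn: '5m' });

// dataLoader side: verify before trusting the username as context
try {
  var claims = jwt.verify(token, SECRET);
  console.log('acting on behalf of', claims.user);
} catch (e) {
  // reject: token missing, expired, or tampered with
}

The point is that dataLoader no longer has to blindly trust a username sent as-is; it trusts the signature instead.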

Automating HTTP Requests

I work with a team whose only way to get a user into their company's database is to navigate through and fill out ~5 or so pages of web forms in their browser. Truly brutal stuff. I've developed web automation scripts in VBScript, Java (with Selenium WebDriver), and iMacros, but all of these solutions are slow. They also depend on the browser, which I'm trying to move away from.
I'm looking for a new platform, possibly some scripting technique/language, that will allow me to issue HTTP requests and read HTTP responses, then build my script around that. The script would perform calculations on the HTTP responses, use file I/O, and use this data to issue further HTTP requests. Again, I'm just spitballing here. If anyone else has a better solution, I'm all ears!
My question for you is: Accepting the team's limitations (read-only DB access), how would you approach a solution and what tools/languages/platforms would you use to do so?
Broad and ambiguous answers are welcome. Thank you for your time.
I agree with @Grisk on using NodeJS/io.js as a platform. It is a powerful tool designed from the ground up for I/O, making it perfect for solving your problem. Additionally, the Node community is extraordinarily vibrant, with npm, the Node.js package manager, hosting thousands of easily accessible modules. To avoid any future confusion: don't mistake NodeJS for a language or a backend framework; it is a native JavaScript runtime built atop Google's V8 engine, together with a set of built-in modules for building powerful I/O applications. Read up about Node online.
As for your specific problem, I'd say you have two options:
Feigning being a browser by crafting and sending the right cookies yourself
Programmatically navigating through the website as you have been doing
As for the former option, you'd need to manually determine which cookies are sent to the server when forms are submitted on each page, and then have your script generate those cookies and include them in the HTTP request. Check out the Node.js http documentation for more information on customizing request headers.
Your headers will need to look something like this:
var headers = {
  'Host': '<website host address here>',
  'Origin': '<website origin here>',
  'Referer': '<website origin here>',
  'User-Agent': 'Opera/9.52 (X11; Linux i686; U; en)',
  'Cookie': '<cookie sent over by server here>'
};
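To actually send a request with those headers, a minimal sketch using Node's built-in http module (the host, path, and form fields are placeholders):

var http = require('http');

var options = {
  host: 'www.example.com', // placeholder
  path: '/login',          // placeholder
  method: 'POST',
  headers: headers         // the object from above
};

var req = http.request(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log(res.statusCode, body); // inspect cookies/markup here
  });
});

req.write('username=alice&password=secret'); // hypothetical form data
req.end();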
I recently came across the node-icloud library, which uses the first method I describe above to provide programmatic access to one's iCloud account. I strongly suggest reading through its code to see how it works.
Additionally, I'd suggest reading up on HTTP headers in general.
For the second option, check out PhantomJS and Zombie.js. PhantomJS is nice because it runs without a visible browser. I'm not sure how the speed of these two libraries compares to what you have already been doing, but they are worth testing out.
One last thing: I would recommend building a custom JSON-based DSL for automating interaction with webpages, so you can very easily redesign your browser interaction workflows.
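For instance, such a DSL might describe a workflow as declarative steps that a small interpreter replays (the shape below is purely illustrative, not an existing format):

{
  "workflow": "create-user",
  "steps": [
    { "action": "get",    "url": "/users/new" },
    { "action": "fill",   "form": "user", "fields": { "name": "{{name}}" } },
    { "action": "submit", "form": "user" }
  ]
}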
Additionally, if you choose to use nodejs, an understanding of node streams and the details behind its event loop would be beneficial.
Best of luck!
I would start looking into NodeJS as a platform. Its HTTP library is incredibly powerful for writing applications that need to make multiple HTTP requests with unusual structure, and it can communicate easily with a browser or basically anything else you could possibly need. Look at the fs module if you need to do file I/O.
If you wanted to get really fancy and use websockets to build a dynamic webapp that you can use as a front-end for your tool, you could even do that, so there's a lot of flexibility.

How to achieve realtime updates on my website (with Flask)?

I am using Flask, and I want to show the user how many visits he has on his website in real time.
Currently, I think one way is to create an infinite loop with some delay after every iteration, which makes an AJAX request to get the current number of visits.
I have also heard about Node.js; however, I think that running another process might make the computer it's running on slower (I'm assuming)?
How can I achieve realtime updates on my site? Is there a way to do this with Flask?
Thank you in advance!
Well, there are many possibilities:
1) Polling - this is exactly what you've described: an infinite loop that makes an AJAX request every now and then (see the client-side sketch after this answer). Easy to implement, and it can easily be done with Flask, but it's quite inefficient - it eats lots of resources and scales horribly, making it a really bad choice (avoid it at all costs). You will either kill your machine with it, or the polling interval will have to be so large that it makes for a horrible user experience.
2) Long polling - a technique where the client makes an AJAX request but the server responds to that request only when a notification is available. After receiving the notification, the client immediately makes a new request. You will need a custom web server for this - I doubt it can be done with Flask. A lot better than polling (many real websites use it), but it could still be more efficient. That's why we now have:
3) WebSockets - truly bidirectional communication. Each client maintains an open TCP connection with the server and can react to incoming data. But again: it requires a custom server, and only the most modern browsers support it.
4) Other stuff like Flash or Silverlight, or other HTTP tricks (chunked encoding): pretty much the same as option 3, though more difficult to maintain.
So as you can see, if you want something more elegant (and efficient) than polling, it requires some serious preparation.
As for processes: you should not worry about that. It's not about how many processes you use but how heavy they are. One badly written process can easily kill your machine, while 100 well-written ones will run smoothly. So make sure your code is written in such a way that it won't freeze your machine (and I assure you this can be done, up to some point defined by the number of simultaneous users).
As for the language: it doesn't matter whether it's Node.js or anything else (like Python). Pick the one you feel more comfortable with. However, I am aware that there's a strong tendency to use Node.js for such projects, so there may be more suitable libraries out there. Or maybe not. Python, for example, has Twisted and/or Tornado for exactly this (and probably much, much more).
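The client side of option 1 is just a timer. A minimal sketch, where /visits is a hypothetical Flask endpoint returning JSON like {"count": 42}:

setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/visits');
  xhr.onload = function () {
    // update the counter shown on the page
    document.getElementById('visits').textContent =
      JSON.parse(xhr.responseText).count;
  };
  xhr.send();
}, 5000); // one request every 5 seconds - this overhead is why polling scales badly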
WebSocket is an event-driven protocol, which means you can use it for truly real-time communication.
Kenneth Reitz wrote an extension named Flask-Sockets that is excellent for WebSockets:
Article: introducing-flask-sockets
Github: flask-sockets
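Whatever handles the socket on the Flask side, the browser half is plain JavaScript. A minimal sketch, assuming a hypothetical /visits socket route that pushes the current count:

var ws = new WebSocket('ws://' + location.host + '/visits');
ws.onmessage = function (event) {
  // the server pushes a new value whenever it changes
  document.getElementById('visits').textContent = event.data;
};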
In my opinion, the best option for achieving real-time data streaming to a frontend UI is to use a messaging service like PubNub. They have libraries for any language you are going to want to use. Basically, your user interfaces subscribe to a data channel. Things that create data then publish to that channel, and all subscribers receive the publish very quickly. It is also extremely simple to implement.
You can use PubNub, and specifically PubNub Presence, which is meant especially for online presence detection. It provides key features like:
Track online and offline status of users and devices in realtime
Occupancy to monitor user and machine presence in realtime
Join/Leave Notification for immediate updates of all client connections
Global Scale with synchronized servers across the PubNub Data Stream Network
PubNub Presence Tutorial provides code that can be downloaded to get presence up and running in a few minutes. Five Ways You Can Use PubNub Presence shows you the different ways you can use PubNub presence.
You can get started by signing up for PubNub and getting your API keys.

Most viable WebSocket/Perl solution

The company I'm working at uses Perl for all "backend related" stuff. However, we would like to have some real-time communication between server processes and clients connected via browser.
We are also using Apache as the web server, with mod_perl. So that is my first question: I don't see any practical way to combine a WebSocket server with that constellation. Maybe there is one I haven't found yet?
The only thing that looks really serious on that topic is Mojolicious. However, I'm not that experienced with it yet, so I'd be happy if someone could say whether I can use it in my current mod_perl environment. I think I would also have to run it as a standalone web server process, no?
Which brings me to my second question. What is the best practice if you have multiple Perl files doing certain things on Apache/mod_perl, but you want to keep all your connected users informed about things? What I mean is: all these scripts are accessed via XHR, but some actions require other users to be informed. Currently, we do classic AJAX polling.
The problem I'm struggling with is that if there is a dedicated WebSocket server which runs independently, all those scripts would need to somehow communicate with this process as well, right? How would one do that? Pipes? Sockets? Shared memory?
Theoretically, if I chose to go with such an independent WebSocket server solution, I could write it in any language, right? It could even be Ruby or Node. I'm just wondering if that is the best way, or if there is a good solution that is more integrated into the existing Perl/mod_perl constructs.
TL;DR
Is it best practice to have a standalone, independent WebSocket server which communicates with the rest of your Apache/mod_perl scripts as well as with its connected clients?
You could look at the AnyEvent CPAN module:
http://metacpan.org/pod/AnyEvent
With it you can write your own standalone, event-driven WebSocket server; you can also find plenty of examples on Google or in AnyEvent's perldoc.
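To make the standalone-server idea concrete, here is a minimal sketch in Node (which the question mentions as an option): browsers connect over WebSocket, and the mod_perl scripts notify the process by POSTing to a local-only HTTP port. The ws module, the ports, and the plain-text payload are all assumptions:

var http = require('http');
var WebSocketServer = require('ws').Server; // npm install ws

var clients = [];

// Public side: browsers connect here over WebSocket
var wss = new WebSocketServer({ port: 8080 });
wss.on('connection', function (socket) {
  clients.push(socket);
  socket.on('close', function () {
    clients.splice(clients.indexOf(socket), 1);
  });
});

// Internal side: mod_perl scripts POST events to this local-only endpoint
http.createServer(function (req, res) {
  var body = '';
  req.on('data', function (chunk) { body += chunk; });
  req.on('end', function () {
    clients.forEach(function (c) { c.send(body); }); // fan out to every client
    res.end('ok');
  });
}).listen(9090, '127.0.0.1');

From a mod_perl script, informing connected users is then just an HTTP POST to 127.0.0.1:9090 (e.g. with LWP::UserAgent), which answers the pipes/sockets/shared-memory question with: a plain socket.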

Designing Javascript frontend <-> C++ backend communication

In the near future I will have to build a system with a C++ backend and a web frontend (those are the requirements). At the moment, I don't know much more about it. I think the frontend will be triggering data delivery, not the backend - so no need for Comet-like things.
Because of possibly little experience in this field, I'd really appreciate your comments about design decisions I made.
First of all, I don't like the option of generating HTML from C++.
So, the C++ backend will have to communicate with the JavaScript frontend. The simplest option I see here is Ajax. I think it should be OK, so far.
Communicating through Ajax with the C++ backend means that the backend has to be capable of handling HTTP. It would be nice to separate the backend that provides the actual data from the HTTP handling functionality.
Here I see a place for Node.js. I got an overview of it, and this is where all my doubts lie.
Should I have an HTTP handling server in Node.js, with the 'data backend' as a Node.js module? I think it should be OK - but I'm not sure that I really need all this asynchronization, so there may be some simpler options I'm not aware of. How would you build such a system?
Thanks in advance.
"All this asynchronization" is not something that Node.js works very hard to provide as an extra. It is a different view of Web serving that is easy as breathing once you understand how Node.js works.
For example, my colleagues needed a way to wrap a C++ program as a web service, but the program had a significant start-up cost, so they wanted to run just one instance of the program, running in a loop, serving all the web requests. The whole thing in Node.js took less than two screenfuls.
Wrapping a single program that is called for each request can be done in less than ten lines of Node.js. Don't think of asynchronicity as a chore - if you embrace it, Node.js is awesome.
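As an illustration of that per-request wrapping, a sketch of roughly what such a service looks like (the binary name, argument passing, and port are placeholders, not my colleagues' actual code):

var http = require('http');
var execFile = require('child_process').execFile;

http.createServer(function (req, res) {
  // run the C++ program once per request; './backend' is a placeholder
  execFile('./backend', [req.url], function (err, stdout) {
    if (err) { res.statusCode = 500; return res.end('backend error'); }
    res.setHeader('Content-Type', 'application/json'); // assuming it emits JSON
    res.end(stdout);
  });
}).listen(3000);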
That said, you could go the CGI route, and do it in a bit more standard way, and the end result would be pretty much the same. This may or may not come in handy.
Did you consider the CGI/FastCGI module option with nginx, Apache, etc.?
If not, then I think it makes sense to start from there. Your module will handle the data/JSON requests, and the rest will be handled by the HTTP server.
