Writing JS code to mimic api design - javascript

We're planning on rebuilding our service at my workplace, creating a RESTful API and such, and I happened to stumble on an interesting question: can I write my JS code in a way that mimics my API design?
Here's an example to illustrate what I mean:
We have dogs, and you can access those dogs doing a GET /dogs, and get info on a specific one by GET /dogs/{id}.
My JavaScript code would then be something like
var api = {
    dogs: function (dogId) {
        if (dogId === undefined) {
            // request /dogs from the server
        } else {
            // request /dogs/{dogId} from the server
        }
    }
};
All is fine and dandy with that code: I just have to call api.dogs() or api.dogs(123) and I'll get the info I want.
Now, let's say those dogs have a list of diseases (or whatever, really) which you can fetch via GET /dogs/{id}/diseases. Is there a way to modify my JavaScript so that the previous calls remain the same (api.dogs() returns all dogs and api.dogs(123) returns dog 123's info) while allowing me to do something like api.dogs(123).diseases() to list dog 123's diseases?
The simplest way I thought of doing it is by having my methods actually build queries instead of retrieving the data and a get or run method to actually run those queries and fetch the data.
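For example (a rough, untested sketch of that query-builder idea; all names are made up and no requests are actually fired), every call could just extend a URL and a final get() would run it:

```javascript
// Each call only builds up a path; get() is where a real request would go.
function resource(path) {
    return {
        url: function () { return path; },
        diseases: function () { return resource(path + '/diseases'); },
        get: function () {
            // here you'd actually fire the AJAX request for `path`
        }
    };
}

var api = {
    dogs: function (dogId) {
        return dogId === undefined
            ? resource('/dogs')
            : resource('/dogs/' + dogId);
    }
};

api.dogs().url();               // '/dogs'
api.dogs(123).url();            // '/dogs/123'
api.dogs(123).diseases().url(); // '/dogs/123/diseases'
```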
The only way I can think of building something like this is if I could somehow detect, when executing a function, whether some other function is chained to the result, but I don't know if that's possible.
What are your thoughts on this?

I cannot give you a concrete implementation, but here are a few hints on how you could accomplish what you want. It would be interesting to know what kind of server and framework you are using.
Generate a WADL describing your service (write it yourself or autogenerate it from code) and then try to generate the code from it, for example with XSLT.
In my REST projects I use Swagger, which analyzes some common Java REST implementations and generates JSON descriptions that you could use as a base for your JavaScript API.
This can be easy for simple REST APIs but gets complicated as the API divides into complex hierarchies or takes on a tree structure. Then everything will depend on exact documentation of your service.

Assuming that your JS application knows of the services provided by your REST API, i.e. you send it a JSON or XML file describing the services, you could do the following:
var API = (function () {
    // private members: here you hide the API's functionality from the outside
    var sendRequest = function (url) { return {}; }; // send GET request

    return {
        // public members: methods that will be exposed to the outside
        getDog: function (id, criteria) {
            // check that criteria isn't an invalid request (remember the JSON file?)
            // generate the url from id and criteria
            var url = '/dogs/' + id + '/' + criteria; // simplified
            var response = sendRequest(url);
            return response;
        }
    };
}());
var diseases = API.getDog("123", "diseases");
var breed = API.getDog("123", "breed");
The code above isn't 100% complete, since you still have to deal with the AJAX call, but it is more or less what you want.
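For instance, sendRequest could return a Promise and getDog would simply pass it through. In this sketch the transport is stubbed to resolve with the URL it was given, just so the shape is visible (the stub and the URL scheme are placeholders, not a real client):

```javascript
var API = (function () {
    // Stubbed transport: a real version would perform the GET request
    // (XMLHttpRequest, fetch, $.ajax, ...). The stub just echoes the URL.
    var sendRequest = function (url) {
        return Promise.resolve({ requestedUrl: url });
    };

    return {
        getDog: function (id, criteria) {
            var url = '/dogs/' + id + (criteria ? '/' + criteria : '');
            return sendRequest(url); // a Promise the caller can .then() on
        }
    };
}());

API.getDog('123', 'diseases').then(function (response) {
    // response.requestedUrl is '/dogs/123/diseases'
});
```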
I hope this helps!

Related

How to pass a json object from a Python function to Javascript with Eel where you can use and manipulate the json

This question may be a bit confusing, so let me give you some background. Eel is a Python module where you can take functions made in Python and use them in JavaScript, and vice versa. What I want to do is take a JSON made by a Python function, put it in JavaScript, and make a table based on the JSON that was taken from the Python side. Here's an example.
python.py
def json_example():
    json = [
        {
            "key": "value1"
        },
        {
            "key": "value2"
        }
    ]
    return json
js.html
<body>
    <div></div>
</body>
<script>
    function js_example() {
        // This is where the function from Python is called
        var json_obj = eel.json_example();
        var tbl = $("<table/>").attr("id", "example_table");
        $("div").append(tbl);
        for (var i = 0; i < json_obj.length; i++) {
            var tr = "<tr>";
            var td = "<td>" + json_obj[i]["key"] + "</td></tr>";
            $('#example_table').append(tr + td);
        }
    }
</script>
I tested both of these functions separately, with a few changes, and they both work. However, here's where the problem starts. For some reason, the JavaScript part is not getting anything from the function it calls in the Python code. The variable json_obj should equal the JSON I made in the Python function, but the return value of the function isn't producing tangible data that can be manipulated in the JavaScript; it basically returns nothing. The eel transfer itself works as well: if you replace return with print, it will print the JSON in the console.
Also, please don't tell me to just put the JSON itself in the JavaScript. I have a reason for needing the JSON to come from the Python side.
So basically, here's my question: how do you get a Python function to create a value that can be manipulated in Javascript?
The problem is that when eel exposes a function, what it actually does is create a new function that returns a promise containing the return value of your Python function.
So you should have something like this instead:
let json_obj = '';
eel.json_example()(x => json_obj = x);
If you need more help on callbacks, refer to https://github.com/ChrisKnott/Eel.
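The double-call shape is easy to see with eel stubbed out by a plain function (a stand-in only; the real eel call resolves asynchronously, while the stub is synchronous to keep the sketch short):

```javascript
// Stand-in for an eel-exposed function: the first call returns a function
// that takes your callback, and the callback receives the Python value.
function json_example() {
    var result = [{ key: 'value1' }, { key: 'value2' }];
    return function (callback) { callback(result); };
}

let json_obj = '';
json_example()(x => json_obj = x);
// json_obj now holds the array instead of a pending promise
```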
Convert to JSON within Python if you're calling Python to begin with, and send the JSON to JS in the return value.
See: https://github.com/ChrisKnott/Eel/tree/master/examples/03%20-%20sync_callbacks
To do a synchronous operation that will take time to complete in Python and then return a value into JS, use:
let n = await eel.py_random()();
which is really
let mySlowReturnValueFromPython = await eel.myExposedPythonFunction()();
In fact, I tried to code in my own promises and I was getting back garbage that looked like eel promises. The browser maintains its thread while you call this way, so the user can kick off a long Python operation and still interact with the GUI.
I'd note that you can still call things that update the GUI asynchronously. If you have a bunch of Python functions which return ready-made HTML, as I do, then you can kick them all off in a row without waiting, and they will update the divs whenever they return. I use this:
eel.expose(updateDiv);
function updateDiv(newData, divToUpdate) {
    var fieldToUpdate = document.getElementById(divToUpdate);
    fieldToUpdate.innerHTML = newData;
}
Then I call my Python function, which gets the data synchronously, packs it up into a ready-made HTML chunk for the GUI, and then calls updateDiv from Python. I'm actually really enjoying the power that this "ping pong" interaction between a synchronous codebase and an asynchronous one gives me when working with a GUI. Worlds better than faffing about with Tk.
I hope this helps you, and that you struggle with it less than I did. Once you understand how this works, Eel is really great. It apparently handles sync for you: just hand it a blank callback (or whatever black magic that is). What a great lib! Eel is just perfect for locally hosted GUIs. I'd like to see a better GUI framework than HTML/CSS/JS, but the truth is that there really isn't one; those technologies are so well tested and stable, with so many available examples for whatever you could want to create.
I'd really like to see this become the native Python GUI solution. Browsers are extremely cross-platform and the only problem to solve when porting becomes interfacing the browser to Python.

Connection response properties are camel case

I have no idea why this is happening. I'm connecting my Angular 4 app to a SignalR hub on the hosting server and it works like a charm (version 2.2.2).
I'm now having to add a second SignalR connection to another project, and for some unknown reason the properties of the response are all camelCase instead of PascalCase. The jquery.signalr-2.2.2.js file, however, expects them to be PascalCase and throws an error that the server version is "undefined".
"Undefined" is logical, since the script is looking for res.ProtocolVersion and that property does not exist on my deserialized response. I do, however, have a res.protocolVersion, and that one holds the exact value it needs.
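To illustrate with made-up data: the script and the server simply disagree on the casing of the same property.

```javascript
// Made-up deserialized negotiation response (camelCased by the server)
var res = { protocolVersion: '1.5' };

var whatTheScriptReads = res.ProtocolVersion; // undefined - wrong casing
var whatTheServerSent = res.protocolVersion;  // '1.5'
```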
I've been losing a lot of time on this, any help is seriously appreciated!
Edit: @rory-mccrossan
I thought as much, and that's why I commented out the server-side JSON serializer/formatter code, but to no avail.
I'm open to any suggestion on where to look next.
So after Rory's hint I searched some more on the internet and of course, someone else has run into this problem:
SignalR : use camel case
However, the solution isn't working for me :/
Then I found a similar solution; here, however, you'll always have the default contract resolver except when the object comes from a certain library (the one with your view models):
https://blogs.msdn.microsoft.com/stuartleeks/2012/09/10/automatic-camel-casing-of-properties-with-signalr-hubs/
Note: this is not the perfect solution, but it is the one that worked best for my scenario.
So I came up with a new solution altogether that ties in nicely with the existing code that is giving me the problem.
Another way around this is to let your app use the DefaultContractResolver. SignalR will then connect, but the rest of your application will break. To mitigate this in the solution I'm working on, I used two simple extension methods.
Firstly I extended the HttpConfiguration class to swap out the formatter for a CamelCasePropertyNamesContractResolver
public static class HttpConfigurationExtensions
{
    public static HttpConfiguration ToCamelCaseHttpConfiguration(this HttpConfiguration configuration)
    {
        var jsonFormatter = configuration.Formatters.OfType<JsonMediaTypeFormatter>().FirstOrDefault();
        bool needToAddFormatter = jsonFormatter == null;

        if (needToAddFormatter)
        {
            jsonFormatter = new JsonMediaTypeFormatter();
        }

        jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        jsonFormatter.SerializerSettings.DateTimeZoneHandling = DateTimeZoneHandling.Utc;

        if (needToAddFormatter)
        {
            configuration.Formatters.Add(jsonFormatter);
        }

        return configuration;
    }
}
The existing Web API always returns an HttpResponseMessage, and that's why I could go about it the way I did.
Example of an api call
[Route("")]
[HttpPost]
public async Task<HttpResponseMessage> CreateSetting(Setting setting)
{
    // the response object is basically the data you want to return
    var responseData = await ...
    return Request.CreateResponse(responseData.StatusCode, responseData);
}
I noticed every API call was using the Request.CreateResponse(...) method. What I didn't see immediately is that Microsoft has actually foreseen all the necessary overloads, which meant I couldn't just make my own Request.CreateResponse(...) implementation. That's why it's called MakeResponse:
public static class HttpRequestMessageExtensions
{
    public static HttpResponseMessage MakeResponse<T>(this HttpRequestMessage request, T response) where T : Response
    {
        return request.CreateResponse(response.StatusCode, response,
            request.GetConfiguration().ToCamelCaseHttpConfiguration());
    }
}
The Response classes are the data structures you want your API to return. In our API they all get wrapped in one of these structures, which results in an API with responses similar to the ones from the Slack API.
So now the controllers all use Request.MakeResponse(responseData).

Node.js design: multiple async functions writing to database using function passed as a closure

I am writing a standalone web scraper in Node, run from the command line, which looks for specific data on a set of pages, fetches page-view data from Google Analytics and saves it all in a MySQL database. Almost everything is ready, but today I found a problem with the way I write data to the db.
To make things easier, let's assume I have an index.js file and two controllers: db and web. Db reads/writes data to the db; web scrapes the pages using a configurable number of PhantomJS instances.
Web exposes one function, checkTargetUrls(urls, writer), where urls is an array of urls to be checked and writer is an optional parameter, called only if it is a function and there is data to be written.
Now, the way I pass the writer is obviously wrong, but it looks as follows (in index.js):
// some code here
// ...
let pageId = 0;
// ... some promise code which checks the validity of the urls,
// creates a new execution in the database, etc. ...
.then(urls => {
    return web.checkTargetUrls(urls,
        function (singleUrl, pageData) {
            // ... a chain of promisable functions from the db controller,
            // which first looks up the page id in the db, then puts it
            // in the pageId variable and continues with the write to db ...
        });
}).then(() => {
    logger.info('All done captain!');
}).catch(err => { logger.error(err); });
The effect is that pageId randomly gets overwritten by the id of a preceding/succeeding page, and invalid data is saved. Inside web there are up to 10 concurrent instances of PhantomJS running, which call the writer function after they have analyzed a page. Excuse my language, but an analogy for this situation would be having, say, 10 instances of some object which all rely for writing on a singleton, which causes the pageId overwriting problem (I don't know how to properly express it in JS/Node.js terms).
So far I have found one fix for the problem, but it is ugly, as it introduces tight coupling. If I put the writer code in a separate module and load it directly from inside the web controller, all works great. But to me that is a bad design pattern, and I would rather do it otherwise.
var writer = require('./writer');

function checkTargetUrls(urls, executionId) {
    return new Promise(
        function (resolve, reject) {
            let poolSize = config.phantomJs.concurrentInstances;
            let running = 0;
            // ...
            // a bit of code goes here
            // ...
            if (slots !== undefined && slots !== null && slots.data.length > 0) {
                return writer.write(executionId, singleUrl, slots);
            }
            // ...
            // more code follows
        });
}
I am having a hard time finding a nicer solution where I could still pass writer as an argument to the checkTargetUrls(urls, writer) function. Can anyone point me in the right direction or suggest where to look for the answer?
The exact problem with your global pageId is not entirely clear to me, but you could reduce the coupling by exposing a setWriter function from your web controller.
var writer;
module.exports.setWriter = function(_writer) { writer = _writer };
Then near the top of your index.js, something like:
var web = require('./web');
web.setWriter(require('./writer'));
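As for the pageId getting overwritten: that is the classic symptom of one variable shared by concurrent callbacks, and keeping the looked-up id local to each invocation avoids it. A simplified, self-contained sketch (the db lookup is stubbed with a timer, and all names are made up):

```javascript
// Stand-in for the db lookup that resolves after `delay` ms
function lookupId(url, delay) {
    return new Promise(resolve => setTimeout(() => resolve(url.length), delay));
}

let pageId = 0; // shared by every writer - this is the bug

async function writeShared(url, delay) {
    pageId = await lookupId(url, delay);
    await new Promise(r => setTimeout(r, 50)); // more async work before the write
    return { url, pageId };                    // may read another writer's id
}

async function writeLocal(url, delay) {
    const id = await lookupId(url, delay);     // local to this invocation
    await new Promise(r => setTimeout(r, 50));
    return { url, pageId: id };                // always this page's own id
}
```

Run two writeShared calls concurrently and the slower lookup clobbers pageId before the faster caller gets to its write; with writeLocal each record keeps its own id. The same principle applies however the writer function is passed in.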

NodeJS, SocketIO and Express logic context build

I have read a lot about Express / Socket.IO, and it's crazy how rarely you get any example other than a "Hello" transmitted directly from app.js. The problem is that it doesn't work like that in the real world... I'm actually desperate over a logic problem which seems far away from what the web gives me; that's why I wanted to point this out. I'm sure asking will be the solution! :)
I'm refactoring my app (because there were many mistakes, like using the global scope to store libs, etc.). Let's say I've got a huge system based on Socket.IO and NodeJS. There's a loader in app.js which starts the socket system.
When someone joins the app, it require()s another module: this initializes many socket.on() listeners, which are loaded dynamically and point to some /*_socket.js files in a folder. Each function in those modules represents a socket listener; it's then way easier to call it from the front-end, which might look like this:
// Will call `user_socket.js` and method `try_to_signin(some params)`
Queries.emit_socket('user.try_to_signin', {some params});
The system itself works really well. But there's a catch: the module that loads all those files which understand what the front-end has sent also transmits libraries linked with req/res (sessions, cookies, others...), and it must do so, because the called methods are the core of the app and very often need those libraries.
In the previous example we obviously need to check whether the user isn't already logged in.
// The *_socket.js file looks like this:
var $h = require(__ROOT__ + '/api/helpers');

module.exports = function ($s, $w) {
    var user_process = require(__ROOT__ + '/api/processes/user_process')($s, $w);
    return {
        my_method_called: function (reference, params, callback) {
            // Stuff using $s, $w, etc.
        }
    };
};

// And it's called this way:
// $s = services (a big object)
// $w = workers (a big object depending on $s)
// They are linked with the req/res from the page when they are instantiated
controller_instance = require('../sockets/' + controller_name + '_socket')($s, $w);

// After some processing...
socket_io.on(socket_listener, function (datas, callback) {
    // Will call the correct function, etc.
    $w.queries.handle_socket($w, controller_name, method_name, datas);
});
The good news : basically, it works.
The bad news: every time I refresh the page, the listeners duplicate themselves, because they are registered in a loop called on page load.
(Screenshot omitted: each listener's output appeared several times where there should have been one line.)
So I should put all the socket.on('connection', ...) stuff outside the page loading, which means registering it when the server starts... Yes, but I also need the req/res data to be able to load the libraries, and I only get those when the page is loaded!
It's a programming logic problem. I know I did something wrong, but I don't know where to go now: I've got this big system which "basically" works, but there's a paradox in the way I built it and I can't figure out how to resolve it... I've been stuck for a couple of hours.
How can I refactor to keep the possibility of getting the current libraries, which depend on req/res, within a socket.on() call? Is there a trick? Should I think about completely changing the way I did it?
Also, is there another way to do what I want to do?
Thank you everyone !
NOTE : If I didn't explain well or if you want more code, just tell me :)
EDIT - SOLUTION: As the answer below shows, we can use sockets.once() instead of sockets.on(); there's also the sockets.removeAllListeners() solution, which is less clean.
Try as below.
io.sockets.once('connection', function (socket) {
    io.sockets.emit('new-data', {
        channel: 'stdout',
        value: data
    });
});
Use once instead of on.
This problem is similar to the one described in the following link:
https://stackoverflow.com/questions/25601064/multiple-socket-io-connections-on-page-refresh/25601075#25601075

Using ordinary JavaScript/Ajax, how can you access rows and columns in a Sql DataTable object returned by a VB.Net web service?

Let's say you have a VB.Net web method with the following prototype:
Public Function HelloWorld() As DataTable
Let's say you have that returned to you in ordinary JavaScript/Ajax. How would you get individual rows out of it, and how would you get individual fields out of those rows (at least assuming it comes from an SQL query - I'm not sure if it makes a difference)? I'm not casting it into any particular type before it gets passed to the success function. To illustrate what I'm talking about, it would be kind of like doing this in AS3:
for each (var table:Object in pEvent.result.Tables) {
    for each (var row:Object in table.Rows) {
        ac.addItem(row["id"]);
    }
}
except that I'm not necessarily looking to loop through them like that - I just want some way to get the information out of them. Thanks!
EDIT: For example, say you have the following function in a VB.Net web service:
<WebMethod()> _
Public Function Test() As DataTable
    Return DBA.OneTimeQuery("SELECT id FROM visits WHERE id = 13")
    ' returns a one-row, one-column DataTable with the id field set to 13
End Function
How would you be able to get 13 out of it once it gets into the JavaScript that's calling the web method? Everything I try just says "[object]".
You do not want to return a DataTable object from your web method. The best approach would be to develop a REST web method that returns JSON representing the data you want to return, as JavaScript can work with JSON directly. Here is a Q&A that directs you on how to do this. This applies if you are using WCF, which it appears you are. I find it better to use ASP.NET Web API for RESTful web services.
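Once the method returns JSON, consuming it in JavaScript is one line of parsing (the payload shape below is hypothetical; yours would mirror your query):

```javascript
// Hypothetical JSON the web method could return instead of a DataTable
var responseText = '{"rows":[{"id":13}]}';

var data = JSON.parse(responseText);
var firstId = data.rows[0].id; // 13 - no DataTable plumbing needed
```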
