Create a JavaScript API class with multiple base URLs - javascript

My backend includes multiple microservices, each with its own base URL. At the moment I have the user and metadata services, but this could expand in the future.
I have a React app and I'm trying to create an API wrapper class to call when I need to modify something. My first approach was to create multiple API instances, one per service, and import them as needed:
import userApi from '../userApi'
import metadataApi from '../metadataApi'
userApi.getUser(user_id)
metadataApi.getCollections()
But I'd like to use a different approach that wouldn't require keeping track of where each entity is located in order to use it, like so:
import API from '../api'
API.getUser(user_id)
API.getCollections()
API.deleteUser(user_id)
But I'm not sure how I can achieve this without bloating the API class. Can I import an array of methods inside it and just attach them to the class prototype before exporting?
I want to find a suitable structure to better separate each entity and make it easier to build and modify it in the future.

To be honest, separating your API classes into separate files / modules is fine. It feels like a bit of an overhead while the app is small, but as it grows, it helps keep things organised.
You have already indicated that your backend APIs are structured into microservices, so why not keep them as separate entities in the front end too? It will be easier to manage your API classes when / if you ever start hitting different endpoints.
In the past, though, I have created a base class that each of those API classes can inherit from, where I set up common logic such as request headers, if you want to get some reuse that way.
I have even gone a step further and created another level of abstraction that handles how the integration happens, i.e. via HTTP, where I declare which HTTP client to use. That way, if I ever change the HTTP client, I only change it in one place.
That kind of structure looked like this:
_ServiceProxy.js
Common functions such as GET, POST, PUT, DELETE, etc.
HTTP client defined here
High level error handling defined here
_someBaseAPI.js
An abstract client that would define how to interact with a set of common microservice concerns, e.g. auth logic, etc.
UserAPI.js
A concrete / static class, only interested in how to handle requests / responses to do with Users
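The three layers described above might be sketched like this. Class and method names are illustrative, and `fetch` stands in for whichever HTTP client you choose:

```javascript
// _ServiceProxy.js -- owns the HTTP client and high-level error handling,
// so swapping the client means changing only this file.
class ServiceProxy {
  constructor(baseUrl) {
    this.baseUrl = baseUrl;
  }
  async get(path, headers = {}) {
    const res = await fetch(this.baseUrl + path, { headers });
    if (!res.ok) throw new Error(`HTTP ${res.status}`); // one place for error handling
    return res.json();
  }
}

// _someBaseAPI.js -- common concerns (e.g. auth headers) shared by services.
class BaseAPI extends ServiceProxy {
  authHeaders() {
    return { Authorization: `Bearer ${this.token || ""}` }; // stand-in auth logic
  }
}

// UserAPI.js -- only knows about user endpoints.
class UserAPI extends BaseAPI {
  constructor() {
    super("https://users.example.com"); // placeholder base URL
  }
  getUser(id) {
    return this.get(`/users/${id}`, this.authHeaders());
  }
}
```

Each concrete API (MetadataAPI, etc.) then only declares its own base URL and endpoint methods.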

You can define and export a separate module that imports all the API files and delegates to the individual APIs in its functions; you can then call that one module's functions for any specific API.
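That aggregator might be sketched like this. The service modules are inlined here for illustration, and the URLs and method bodies are placeholders:

```javascript
// In a real app these would live in userApi.js / metadataApi.js and be imported.
const USER_BASE = "https://users.example.com";     // placeholder base URLs
const META_BASE = "https://metadata.example.com";

const userApi = {
  getUser: (id) => `GET ${USER_BASE}/users/${id}`,
  deleteUser: (id) => `DELETE ${USER_BASE}/users/${id}`
};

const metadataApi = {
  getCollections: () => `GET ${META_BASE}/collections`
};

// api.js -- merge every service's methods into one surface. Method names
// must stay unique across services, or the later spread silently wins.
const API = { ...userApi, ...metadataApi };
```

The caller then only imports `API` and never needs to know which service owns which method, which is the usage the question asks for.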

Related

Best practice for using different implementations of data fetching code in React/Next.js app?

Coming from a .Net/C# background, a common pattern there is inversion of control/dependency injection, where we design an interface and then one or more classes that implement that interface. The various other classes that make up the app take in the interfaces that they need to do their job, and make use of the provided methods without worrying about internal implementation. It's then up to the app configuration to determine which one of the interface implementations should be used. This allows the same interface for things like CRUD operations on an API vs. a database, a file, etc.
To give a concrete example of what I'm trying to achieve in the Next.js world:
Let's say I have a todo app with a useTodos() custom hook that returns todos, and an api/getTodos.ts endpoint that also returns todos. Both the hook and the endpoint have in common that they both "get todos" from some data source and return them.
I want the ability to provide a common interface for getTodos() and provide several different implementations for it. And then a central point where I can control which part of the app uses which implementation. For example getTodos() could have the following implementations:
Calls an API
Query Supabase
Query MySQL
Query local storage
Read from a local file
I can then centrally define that the useTodos() hook uses the getTodos() that calls an API, and the api/getTodos.ts uses the getTodos() that queries Supabase. When we decide Supabase is no longer for us and we move to MySQL, I don't need to hunt down all the places where I'm fetching data, instead I just change my central configuration to use the new MySQL implementation.
I'm not looking for a 1:1 imitation of .Net IoC patterns using some obscure library but rather for how this sort of stuff is routinely done in the JS/TS/React/Next.js world. I'm also using 100% functional programming so I'm not looking for solutions that involve classes.
Thank you for reading this far!
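In a functional style, the "interface" can simply be a shared function signature, with one module doing the central wiring. A minimal sketch (all names are illustrative, not an established Next.js convention):

```javascript
// Every implementation of "getTodos" is just an async function that
// returns an array of todos -- that shared signature is the interface.
const getTodosFromApi = async () => {
  // a real implementation would call fetch("/api/todos")
  return [{ id: 1, title: "from api" }];
};

const getTodosFromDb = async () => {
  // a real implementation would query Supabase or MySQL
  return [{ id: 1, title: "from db" }];
};

// Central configuration: moving from Supabase to MySQL means changing
// one line here instead of hunting down every call site.
const dataSources = {
  clientSide: getTodosFromApi, // what a useTodos() hook would import
  serverSide: getTodosFromDb   // what api/getTodos.ts would import
};
```

Consumers import from the configuration module rather than from a concrete implementation, which gives the same decoupling as .NET-style DI without classes or a container.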

Dynamically adding routes and components in Mithril

Mithril's website states:
You can only have one m.route call per application.
So I was almost able to work around this with code splitting.
But my application is only aware of the first-level components for a given URL, which it loads via async code splitting.
Now the problem: Those first-level components would then like to register their own namespaced routes to leverage URL state change for their own inner components since I can't register their routes ahead of time (and Mithril prevents setting the routes again after the initial route was set by the app/wrapping component).
To further the complexity of the issue, each first level component is loaded on an as-needed basis, so I can't wait for all the first level components to load and then instantiate the m.route; routes have to be added dynamically.
I love this framework, but my use case seems like an edge case that I can't seem to resolve.
The simple solution would be to re-instantiate the m.route object after each first-level component loads, but that's not supported.
UPDATE
The purpose of my post was to find a native way to do dynamic routing without losing functionality (such as variadic routing), but it's been reinforced that that's not possible.
I replaced the entire router with an in-house one so I could support dynamic (and unknown) routes, more flexible variadic routing, better params method, and even provide get/set global data across views without a global window variable or use of the History API in that case. I still provide the rest of the functionality that Mithril does, just a little more simply.
Why not do a pull request? From what I've read on different GitHub pages, two big pieces of this would not work with the core logic of Mithril, and/or it's too much of an edge case for them to want to support it.
I'll still choose Mithril over any other framework though.
In the meantime, I built what I need, and hope Mithril 2 will have dynamic routing baked in.
Mithril's router is intended as a relatively simple solution for standing up simple SPAs; dynamically registering routes isn't part of the current design.
I think you'll probably be best served by finding a router that supports the dynamic route registration you require and using that.
Integration of that router with Mithril could be naive (using m.mount() when routes change) or more complex by emulating a bit of the logic of the existing router API.
Mithril's router is not the most advanced tool. Although you can work around it pretty much however you want.
There is a way to make new routes dynamic.
I made a little jsfiddle a while ago. https://jsfiddle.net/Godje/cpzLtoyz/
You are interested in lines 2-11 and 63-92.
Although they aren't dynamic in the fiddle, you can make a function, to replace that switch on line 73, which will process your routes and return a component needed to be rendered. That way if you have an array stream with all the URLs or other routes you want, you can have a function process each param on each route-change call and check it with the array.
Sorry for a messy response. Writing an exact solution to the problem requires a local server.
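The route-table idea described above (a function that maps the current path to a component instead of a hard-coded switch) might be sketched like this, outside of Mithril itself. Names are illustrative; on a route change you would pass the resolved component to m.mount():

```javascript
// Mutable route table: first-level components can register their own
// namespaced routes whenever they finish loading.
const routeTable = {};

function registerRoutes(prefix, routes) {
  // e.g. registerRoutes("/users", { "/:id": UserView })
  for (const [path, component] of Object.entries(routes)) {
    routeTable[prefix + path] = component;
  }
}

function resolveRoute(path) {
  for (const [pattern, component] of Object.entries(routeTable)) {
    // Turn "/users/:id" into ^/users/([^/]+)$ and collect param names.
    const keys = [];
    const source = "^" + pattern.replace(/:[^/]+/g, seg => {
      keys.push(seg.slice(1));
      return "([^/]+)";
    }) + "$";
    const match = path.match(new RegExp(source));
    if (match) {
      const params = {};
      keys.forEach((k, i) => { params[k] = match[i + 1]; });
      return { component, params };
    }
  }
  return null; // unknown route
}
```

Because the table is checked on every route change, routes added after the initial mount are picked up automatically, which is the dynamic behaviour Mithril's built-in router disallows.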

Best practice to change default blueprint actions in sails.js

I'm looking for an answer to what is the best practice for customizing default sails.js CRUD blueprints. A simple example of what I mean: let's say I need to create SomeModel, using the standard blueprint POST /someModel action, and I also want to get information about the authenticated user from the req object and set it on req.body, so I can use values.user.id in the beforeCreate function:
module.exports = {
  attributes: {
    // some attributes
  },
  beforeCreate: function (values, cb) {
    // do something with values.user.id
    cb();
  }
};
I'm very new to sails.js and don't want to use anti-patterns, so I'm asking for inputs on what is the right way to handle such cases. Also it would be great to have some good resources on that topic.
There are a few options open to you which you've identified, and as usual, it depends on your use case. Here are some thoughts on the three options you listed in your comment:
Override controller/create - Of the three, this is the best option because it doesn't clog up code in routes or policies for a single instance.
Write controller/action and add to config/routes.js - While this works, it defeats the purpose of using blueprints, and you have to do all the code in option 1 plus making your routes code messier.
Apply a policy for a single create action - To do this, you will not only have to clutter up your /policies folder but also your policies.js file. This is more code AND it puts the logic for one controller method in two separate places, neither of which is the controller.
Out of these three, option 1 is the best because it contains the code to its context (the controller) and minimizes written code. This is assuming that you only want to alter the create method for one model.
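Option 1 might look like the sketch below. It assumes a Sails-style controller with the model available as a global (SomeModel) and some auth middleware having populated req.user; all names are illustrative:

```javascript
// api/controllers/SomeModelController.js -- overriding the blueprint
// create action so beforeCreate can see the authenticated user.
var SomeModelController = {
  create: function (req, res) {
    var values = req.body || {};
    // Attach the authenticated user so beforeCreate can read values.user.id
    values.user = { id: req.user.id };
    SomeModel.create(values).exec(function (err, record) {
      if (err) return res.serverError(err);
      return res.json(record);
    });
  }
};
// in a real app: module.exports = SomeModelController;
```

All other blueprint actions (find, update, destroy) remain untouched, so you only pay for the one customization you need.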
Side note:
As an example of how your use case determines your implementation, here is how I've set my Sails app up which has a few REST routes, each of which responds differently based on the user calling the route (with a valid Facebook token):
Only uses blueprint action routes (not blueprint REST routes)
First goes through the policies:
'*': ['hasFBToken', 'isTokenForMyApp', 'extractFBUser', 'extractMyAppUser'] which store fbuser, myappuser, and other variables in req.body
At this point, my controller function (e.g. controller.action()) is called and can access information about that user and determine if it has the correct permissions to do CRUD.
Pros of this system:
All code for each CRUD operation is contained within its controller function
Minimal to no code in routes.js (no code) or policies.js (one line) or /policies
Works well with API Versioning (e.g. controllers/v1.0/controller.js), which is much easier to do with versioned controllers than versioned models. That means that I can create a new API version and simply by creating a controller in /v2.0 with a function action(), calls such as POST /v2.0/controller/action will exist with no extra routing needed.
Hopefully this example helps illustrate how design decisions were made to offer functionality like API versioning and consolidate code in its specific context.

Where to format collections / objects

From front end architectural point of view, what is the most common way to store scripts that perform transformations on collections of objects/models? In what folder would you store it, and what would you name the file / function?
Currently I have models, views, controllers, repositories, presenters, components and services. Where would you expect it?
As a component (what would you name it?)? As a service? Currently I use services to make the connection between the presenter and the repository to handle data interactions with the server.
Should I call it a formatter? A transformer? If there is a common way to do, I'd like to know about it.
[...] models, views, controllers, repositories, presenters, components and services. Where would you expect it?
Services, most definitely. This is an interception service for parsing data.
Should I call it a formatter? A transformer?
Well, transformer (or data transformer) is actually quite good IMO. Data interceptor also comes to mind, and data parser, obviously.
If there is a common way to do, I'd like to know about it.
Yes, there is! Override the model's / collection's parse() function to transform the data fetched from the server into your preferred data structure.
Note that you should pass {parse: true} in the options to make it work.
This, of course, does not contradict using the services you wrote from within that function. You can encapsulate the parsing logic in those scripts, and reuse it anywhere you'd like.
Bear in mind that there will probably be very little code reuse when using parse(), as each transformation will relate to a single model or collection.
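One way to keep some reuse is to put the transformation itself in a plain service function and have parse() delegate to it. A sketch, with illustrative field names:

```javascript
// services/userTransformer.js -- a pure function that reshapes raw
// server data into the structure the front end prefers.
function transformUsers(raw) {
  return raw.map(function (u) {
    return {
      id: u.user_id,
      fullName: (u.first_name + " " + u.last_name).trim()
    };
  });
}

// In a Backbone-style collection it would then be wired up as:
//   parse: function (response) { return transformUsers(response); }
// and fetched with { parse: true } in the options.
```

Because the transformer is a pure function, it can also be reused by any other script that receives the same raw shape, and unit-tested without a model at all.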

What's the most scale-friendly way to allow one node process to communicate between its own modules?

I've built a system whereby multiple modules are loaded into an "app.js" file. Each module has a route and schema attached. There will be times when a module will need to request data from another schema. Because I want to keep my code DRY, I want to communicate to another module that I want to request a certain piece of data and receive its response.
I've looked at using the following:
dnode (RPC calls)
Dnode seems more suitable for inter-process communication - I want to isolate these internal messages to within the process.
Faye (Pubsub)
Seems more like something used for inter-process communication, also seems like overkill
EventEmitter
I was advised by someone on #Node.js to stay away from eventEmitter if there are potentially a large amount of modules (and therefore a large amount of subscriptions)
Any suggestions would be very much appreciated!
Dependency injection and invoking other modules directly works.
So either
var m = require("othermodule");
m.doStuff();
Or use a DI library like nCore
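Manual dependency injection needs no library at all: each module exports a factory that receives the modules it depends on, and app.js does the wiring in one place. A minimal sketch with illustrative names:

```javascript
// metadataModule.js -- knows only about its own schema.
function createMetadataModule() {
  return {
    getCollections: function () {
      return ["collectionA", "collectionB"]; // stand-in for a schema query
    }
  };
}

// userModule.js -- asks the injected module for data instead of
// reaching into the other schema directly.
function createUserModule(deps) {
  return {
    getUserWithCollections: function (userId) {
      return { id: userId, collections: deps.metadata.getCollections() };
    }
  };
}

// app.js -- all wiring lives here, in one place.
var metadata = createMetadataModule();
var users = createUserModule({ metadata: metadata });
```

This keeps the inter-module messages inside the process (no pubsub or RPC needed) and makes each module trivial to test by injecting a stub.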
