Smart way to reuse code for creating and updating entity? - javascript

I am currently developing a GraphQL API with NodeJS and I am looking for a smart way to share code between create and update entity functions.
Context
In my application, users are able to create and update their own flights (as a pilot) so I have two GraphQL input fields (CreateFlightInput and UpdateFlightInput) that are called from two different mutations.
Problem
When a user creates a flight, he must provide the plane he flew with. As the plane is represented by a Mongo ID, the API needs to check that the plane exists and that the user can see it. When the user updates a flight, the same check is required (because the user can change the plane he used). As I am using two resolvers (mutations), I don't want to write the same code twice, especially since I have the same problem with the passengers field. You might ask: if all the checks are the same, why am I using two different resolvers? The problem is that the server must perform certain actions during create but not during update. To summarize, we have two resolvers that share some code, but not all of it.
Do you have an idea of where I could write that code so it can be reused by both? I was thinking of a function with a boolean that indicates whether we are editing or not, but I would like to know if there are better methods.
Thanks for your help.
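For illustration, a minimal sketch of the shared-validation idea from the question, assuming Mongoose-style Plane and Flight models; validateFlightInput, the isEditing flag and plane.isVisibleTo are hypothetical names, not an existing API:

// Shared checks used by both mutations.
async function validateFlightInput(input, user, { isEditing = false } = {}) {
  if (input.plane !== undefined) {
    const plane = await Plane.findById(input.plane);
    if (!plane || !plane.isVisibleTo(user)) {
      throw new Error('Plane not found or not visible to this user');
    }
  }
  // Same idea for the passengers field.
  if (!isEditing) {
    // Checks or side effects that only apply on creation go here.
  }
}

const resolvers = {
  Mutation: {
    createFlight: async (_, { input }, { user }) => {
      await validateFlightInput(input, user);
      return Flight.create({ ...input, pilot: user.id });
    },
    updateFlight: async (_, { id, input }, { user }) => {
      await validateFlightInput(input, user, { isEditing: true });
      return Flight.findByIdAndUpdate(id, input, { new: true });
    },
  },
};

Each resolver keeps its own specific work (the create-only actions), while the duplicated plane/passenger checks live in one place.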

Related

In clean architecture between entity layer and use-case layer. What is the use-case boundary?

As I understand it, an Entity carries fundamental properties, methods and validations. A User entity would have name, DoB, email, verified, and whatever else is deemed 'core' to the project. For instance, one project requires a phone number while a completely different project would deem the phone number unnecessary but require a mailing address.
So then, moving outside of the Entity layer, we have the Use-Case layer, which depends on the entity layer. I imagine a use case as something slightly more flexible than an entity. This layer allows us to write additional logic and calculations while making use of an existing entity or multiple entities.
But my first uncertainty is whether a use case can create derived values (DoB => Age) from entity properties and even persist them in storage, or does that need to already exist in the User entity class? Secondly, does the use-case layer even NEED to use entities? Could I have a use case that literally just sums(a, b) and may or may not persist that in storage? Thirdly, when persisting entity-related properties into the database, does retrieving them require validation once again? Is this redundant and hurting performance?
Finally, the bigger question is: what is a use case? Should a use case be adaptable by being agnostic of where its inputs come from and what it serves to? Does this just mean that dependency inversion removes any tie to a particular framework, i.e. using Express or Koa or plain http wouldn't force a rewrite of core logic? Or does it mean adaptable to something even greater, like whether it serves terminal-based applications directly or API request/response-style applications via a web server?
If it's the latter, then it's confusing to me, because it has to be agnostic of where its data comes from and goes to, yet these outputs resemble the very medium they will be delivered to. For instance, designing a RESTful API, a use case may be...
getUserPosts(userId, limit, offset), which will output a format best suited to web API consumers (that, to me, is application logic for a specific application). And it's unlikely that I'll reuse the use case getUserPosts for a different requestor (some terminal interface that runs locally, which wants a more detailed response). So to me it shines when the time comes to switch between application-specific frameworks, like between Express/Koa/http/Connect for a REST API in the same application, or between the Node.js and Bun environments for the same terminal tool, rather than one almighty use case that criss-crosses every kind of application (web service and terminal simultaneously, or any other).
If it is almighty, should a use case be designed with a more generalized purpose? Should it be more configurable? Like previously, I could've added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json'). I suppose the forethought to anticipate different usage and scaling requires experience - unless this is where the open-closed principle shines by making it ready to be extended? Or is it just easier to make separate use cases like getUserPostsWebServices, getUserPostsForLocal, getPremiumUsersPostsForWebServices? This makes sense to me, because now each use case has its own constraints: it is not possible for the web-services variant to reach any more data fetching/manipulation than getUserPostsForLocal or getPremiumUsersPostsForWebServices offers, and the reusability of the web-services use case is not tied to any web-server framework. I suppose this is where I would draw the line for a use case, but I'm inexperienced, and I don't know the answer to this.
I know this has been a regurgitation of my understanding rather than a concrete question, but it still points to the question of what the boundary and definition of a use case is in clean architecture. Thanks for reading; would anyone chime in to clarify anything I got wrong?
But my first uncertainty is whether a use case can create derived values (DoB => Age) from entity properties and even persist them in storage, or does that need to already exist in the User entity class?
The age of a user is derived directly from the date of birth, and the way it is calculated is the same across different applications. Thus it is application-agnostic logic and should be placed in the entity.
Defining a User.getAge method does not mean that the age must be persisted. The entities in the clean architecture are business objects that encapsulate application-agnostic business rules.
Which properties are persisted is decided in the repository. Usually you only persist the basic properties, not the derived ones. But if you need to query entities by derived properties, they can be persisted too.
Persisting time-dependent properties is a bit tricky, since they change as time goes by. E.g. if you persist a user's age and it is 17 at the time you persist it, it might be 18 a few days or even hours later. If you have a use case that searches for all users that are 18 in order to send them an email, you will not find all of them. Time-dependent properties need a kind of heartbeat use case that is triggered by a scheduler and just loads (streams) all entities and persists them again. The repository will then persist the current value of the age, and it can be found by other queries.
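As a rough sketch (the class shape and method name are only illustrative), such an entity exposes the derived value without it ever needing to be persisted:

// Only dateOfBirth would be stored by the repository; the age is computed.
class User {
  constructor({ name, dateOfBirth, email }) {
    this.name = name;
    this.dateOfBirth = dateOfBirth; // a Date
    this.email = email;
  }

  getAge(now = new Date()) {
    let age = now.getFullYear() - this.dateOfBirth.getFullYear();
    const hadBirthdayThisYear =
      now.getMonth() > this.dateOfBirth.getMonth() ||
      (now.getMonth() === this.dateOfBirth.getMonth() &&
        now.getDate() >= this.dateOfBirth.getDate());
    if (!hadBirthdayThisYear) age -= 1;
    return age;
  }
}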
Secondly, does use-case layer even NEED to use entity, could I have a use-case that literally just sums(a, b) and may or may not persist that in storage anyways?
The use-case layer usually uses entities. If your use case were as simple as summing two numbers, it need not use entities, but I guess this is a rare case.
Even very small use cases like sums(a, b) can require the use of entities if there are rules on a and b. These can be very simple rules, like a and b must be positive integer values. But even if there are no rules, it can make sense to create entities, because if a and b are custom entities you can give them a name to emphasize that they belong to a critical business concept.
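A minimal sketch of that idea, with a hypothetical PositiveInteger value object carrying the rule:

class PositiveInteger {
  constructor(value) {
    if (!Number.isInteger(value) || value <= 0) {
      throw new Error(`Expected a positive integer, got ${value}`);
    }
    this.value = value;
  }
}

function sum(a, b) {
  // a and b are PositiveInteger instances, so the rule is enforced by the type.
  return a.value + b.value;
}

sum(new PositiveInteger(2), new PositiveInteger(3)); // 5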
Thirdly, when persisting entity related properties into database, does retrieving them require once again validation, is this redundant and hurting performance?
Usually your application is the only client of the database. If so, then your application ensures that only valid entities are stored in the database. Thus it is usually not required to validate them again.
Validity can be context dependent, e.g. if you have an entity named PostDraft, it should be clear that a draft doesn't have the same validation rules as a PublishedPost.
Finally, a note on the performance concerns. The first rule is: measure, don't guess. Write a simple test that creates, e.g., 1,000,000 entities and validates them. Usually a database query and/or the network traffic (I/O in general) is the performance issue, not in-memory computation. Of course you can write code with weird loops that mess up performance, but often this is not the case.
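A rough sketch of such a measurement in Node.js (validateUser is a stand-in for whatever validation you actually run):

function validateUser(user) {
  if (!user.email.includes('@')) throw new Error('invalid email');
  if (!(user.dateOfBirth instanceof Date)) throw new Error('invalid date of birth');
}

const start = process.hrtime.bigint();
for (let i = 0; i < 1_000_000; i++) {
  validateUser({ email: `user-${i}@example.com`, dateOfBirth: new Date(1990, 0, 1) });
}
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`validated 1,000,000 users in ${elapsedMs.toFixed(0)} ms`);

On a typical machine an in-memory loop like this is likely to finish in a fraction of a second, which puts the cost of re-validation into perspective next to a single network round trip.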
Finally, the bigger question is: what is a use case? Should a use case be adaptable by being agnostic of where its inputs come from and what it serves to? Does this just mean that dependency inversion removes any tie to a particular framework, i.e. using Express or Koa or plain http wouldn't force a rewrite of core logic? Or does it mean adaptable to something even greater, like whether it serves terminal-based applications directly or API request/response-style applications via a web server?
A use case is application-dependent business logic. There are different reasons why the clean architecture (and others, like the hexagonal architecture) makes use cases independent of the I/O mechanism. One is that they are independent of frameworks, which makes them easy to test. If a use case depended on an http controller, or rather if you implemented the use case in an http controller, e.g. a REST controller, then to test it you would need to start up an http server, open a socket, write the http request, read the http response and extract the data you need. Even though there are frameworks and tools that make such a test easy, those tools must ultimately start a server, and this takes time. Tests that are slow are not executed often, are they? And tests are the basis for refactoring. If you don't have tests, or the tests run slowly, you do not execute them. If you do not execute them, you do not refactor. So the code must rot.
In my opinion, testability is the most important aspect, and decoupling use cases from any "details", as Uncle Bob calls the outer layers, increases their testability. Use cases are the heart of an application. That's why they should be easily testable and protected from any dependency on details, so that they do not need to be touched when a detail changes.
If it is almighty, should a use case be designed with a more generalized purpose? Should it be more configurable? Like previously, I could've added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json')
I don't think so. The sideloadingConfig in particular, and formats like JSON or CSV, are not parameters for a use case. These parameters belong to a specific kind of frontend, or rather to a specific kind of controller. A use case provides the raw business data. It is the responsibility of a controller, or better a presenter, to format it.
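A small sketch of that separation, with illustrative names, where the use case returns raw data and presenters on the delivery side decide the representation:

// Use case: raw domain objects only; no knowledge of HTTP, CSV or pagination headers.
async function getUserPosts({ userId, limit, offset }, postRepository) {
  return postRepository.findByUser(userId, { limit, offset });
}

// Presenters owned by the web delivery mechanism.
function presentPostsAsJson(posts) {
  return JSON.stringify(posts.map(p => ({ id: p.id, title: p.title })));
}

function presentPostsAsCsv(posts) {
  const rows = posts.map(p => `${p.id},${String(p.title).replace(/,/g, ' ')}`);
  return ['id,title', ...rows].join('\n');
}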

Good practices for using the same form components for both create and edit pages

I have this crowded form with some 30-40 fields, and most of them have kinda complex logic. Understandably, I wanted to make these form areas reusable for both the create and edit pages, since both are identical in terms of form items and fields. I got them to work without much effort, and they work fine for the most part, except that the quest for reusability introduced some other challenges, and now I am questioning whether it's worth it.
For me some of the challenges of using the same forms for both create and edit are:
Conditionals everywhere. Want to make a field uneditable in edit mode? Extra props to pass in and disable said fields with those props.
Edit mode needs initial state so form fields are filled in with existing data, but the create page does not like that, so even more and even worse conditions like:
const { user } = React.useContext(DetailContext) ?? {}; // I know...
const [userType, setUserType] = useState(user?.user_type ?? null); // yep
At this point I can't even decide how to safely assign and use stuff, and I can't estimate what parts are prone to breaking.
Is this inevitable and fine as is or is this complete silliness? What are some good practices for making form areas or forms reusable? Can I reduce noise without sacrificing reusability?
The question is indeed kind of broad but a general solution might still help you. There are a couple of main points you need to consider:
Are all the fields present in both edit and create mode, or might there be situations where not all of them are included in one or the other?
Are there fields that are supplied on creation but cannot be edited after that (in edit mode)?
Are there initial values for any of the creation form fields?
I would tackle the problem in the following manner:
You'll have a single form component which accepts field properties as a parameter (component props). For each field you supply the following information:
Initial value, if any (initialValue)
If the field should be present in the form at all (isDisplayed)
If the field should be present but disabled (isDisabled)
This way you cover all of the previously mentioned points while keeping a single form component. You'll also supply a parameter like mode, with possible values edit/create, which guides the form logic and how the form submit is handled (differences in handling logic, different endpoints and such). Then, in your form component, you do some checks based on the provided parameters, i.e. whether a given form input should be disabled (isDisabled) or displayed at all (isDisplayed).
It's not a perfect solution, but it is flexible enough to let you use the same form template and share most of the handling logic. If, however, you have vast differences in how the edit/create forms are structured, you might be better off duplicating some logic by introducing two separate components, saving your sanity and the time spent figuring out how to cover and abstract every single new extension you want to introduce in one of them.
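For illustration, a minimal sketch of the single configurable form component described above; the prop names (fields, mode, initialValue, isDisplayed, isDisabled) follow that description, but the exact shape is an assumption:

function EntityForm({ mode, fields, onSubmit }) {
  const [values, setValues] = React.useState(() =>
    Object.fromEntries(
      Object.entries(fields).map(([name, f]) => [name, f.initialValue ?? ''])
    )
  );

  const handleChange = (name) => (e) =>
    setValues((prev) => ({ ...prev, [name]: e.target.value }));

  return (
    <form onSubmit={(e) => { e.preventDefault(); onSubmit(values, mode); }}>
      {Object.entries(fields).map(([name, f]) =>
        f.isDisplayed === false ? null : (
          <input
            key={name}
            name={name}
            value={values[name]}
            disabled={!!f.isDisabled}
            onChange={handleChange(name)}
          />
        )
      )}
      <button type="submit">{mode === 'edit' ? 'Save' : 'Create'}</button>
    </form>
  );
}

// Create page: <EntityForm mode="create" fields={{ userType: {} }} onSubmit={createUser} />
// Edit page:   <EntityForm mode="edit" fields={{ userType: { initialValue: user.user_type, isDisabled: true } }} onSubmit={updateUser} />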

Web interface to allow clients to write and execute conditional statements

I'm writing a web-based system using Flask with React and Redux that needs to allow web-based clients to write conditional statements that can be saved to a configuration file, but also executed in real time without restarting the server or other services.
Obviously this can be done using eval(), but obviously we won't be using that.
Any safe ways to run user conditions that call on live variables to perform calculations?
As an example they might want to perform a standard conditional calculation like:
if (a === 1 && (b === 2 || c === 2)) {
  // do something
}
Where a, b, and c are values that are provided from the server to the client and change dynamically.
UPDATE based on question:
The server provides real-time updates on alarms monitored by the server. When an alarm changes state - say from no-alarm to in-alarm - it sends the new data to the client.
The client side renders this information as a list of alarms. The list can be filtered easily enough, but one issue is that you can have an alarm flood event where ~1000 alarms all come in simultaneously. You also have a few standard/common events where a particular series of alarms all change to a particular state at the same time and that indicates a particular issue/fault and hence a particular fix.
Each user is unique, so it can't be a one-size-fits-all approach, and it would be useful if each user could set some basic rules that determine what message to display based on the value of any combination of alarms and their alarm state. They would use a browser form to select these condition states, which they can submit to the server. This will insert a line into their personal configuration file held on the server, so that each time they log in they automatically have access to these calculations.
If an alarm changes state, it is sent to the client, which then automatically performs the calculation in the background to determine whether a message needs to be displayed.
It appears the best approach is to allow the strings, but to ensure that all strings are passed through a very carefully constructed function that removes anything malicious.
I'll probably run this function many times - first on submission, and also just before executing.
If these functions are developed using an interactive GUI of dropdown buttons and auto-fill input fields that constructs the code for the user (i.e. the user doesn't actually write the code), it should be safe enough.
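A hedged sketch of that GUI-built approach: rather than storing code strings at all, the form could emit a structured rule object that a small interpreter evaluates against the live alarm values, so nothing is ever passed to eval(). The rule shape and evaluateRule are hypothetical:

// Built by the dropdown form, stored in the user's configuration file:
const rule = {
  op: 'and',
  conditions: [
    { alarm: 'a', equals: 1 },
    { op: 'or', conditions: [{ alarm: 'b', equals: 2 }, { alarm: 'c', equals: 2 }] },
  ],
};

function evaluateRule(node, alarms) {
  if (node.op === 'and') return node.conditions.every((c) => evaluateRule(c, alarms));
  if (node.op === 'or') return node.conditions.some((c) => evaluateRule(c, alarms));
  return alarms[node.alarm] === node.equals;
}

// Alarm states pushed from the server:
evaluateRule(rule, { a: 1, b: 2, c: 0 }); // true -> show the user's message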

Should updates to Firestore items in AngularFire be done through the AngularFirestoreCollection?

In my app, I have a list that requires an "or" condition. But, as the docs say:
In this case, you should create a separate query for each OR condition and merge the query results in your app.
As a result, in my service, I'm managing two queries and surfacing them as a single observable list to consumers.
The problem comes in with updating. I have the choice of doing extra work to match up the item needing update to the correct collection so I can do the following:
myCollection.doc(item.id).update(item);
or I can make this much more simple and just:
angularFirestore.doc(`path/to/${item.id}`).update(item);
I'm operating under the assumption that the first method will result in faster updates, since I'm using the same reference that would be optimistically updated instantly, and that the latter will be slower because it is more roundabout: it updates the persistence layer and the collection reference only gets notified about it later (probably still a small amount of time).
All of the above is assumption, however. I back it only with a few random instances where I've seen it take a second or two for an update or delete to show up in another part of the view, but I haven't been able to actually inspect the process.
Does anyone know if the above is correct? Should I be doing the extra work to write through the collection references or does angularfire(and/or firestore) handle this and make them effectively the same operation under the hood?
AngularFire2 is a thin wrapper around RxFire, which itself is a relatively thin wrapper around the Firebase JavaScript SDK.
There should be no significant performance difference between updating a document through AngularFire or updating it directly through the JavaScript SDK. In both cases the majority of the time is spent in the JavaScript SDK, and on the wire between the client and server. For this reason I typically update directly through the JavaScript SDK, since it's often a bit more direct and the AngularFire abstraction has little advantage for me in write operations. Given that AngularFire is built on top of this SDK, it picks up the changes instantly even when they're not made through AngularFire.
If you have an instance where this does not seem to be the case, I recommend creating a question with the minimal, complete/standalone code that reproduces that problem.

Page Object Model or JavaScript testing alternatives?

Is the Page Object Model still the best way to automate web applications?
For me the Screenplay pattern seems a nicer way to manage it.
Or using JavaScript alternatives like Acceptance Testing with Cucumber.js & WebdriverIO for example.
Your question makes it sound like there is only one answer, but actually you can merge these all together if you code it correctly.
Page Object Model
This is good for separation of the elements on the page from the rest of your code. If implemented correctly, a change on the page can be corrected for all scenarios using that page by simply changing one line in the POM.
nextButton.click();
fullName.sendKeys("John Doe");
Screenplay Pattern
This is good for separation of actions that occur on the different pages, the different workflows.
james.attemptsTo(
  goToTheNextPage(),
  fillOutHisDetails()
);
If the journey has workflow that is slightly changed, the idea is that you can simply reorder the Screenplay pattern, or remove the actions that are no longer necessary.
In the example above, if the business were to decide that the registration form should be a single page instead of multiple, it would make more sense to delete this single line:
goToTheNextPage(),
instead of deleting the two lines that I would otherwise have put in:
driver.findElement({css:"#next"}).click();
driver.findElement({css:"#registrationDetails"});
CucumberJS + WebdriverIO
This is good for portraying the information of a scenario in pure business language.
Scenario: I am filling out the registration form
  Given I am a new user
  And I want to register for an account
  When I fill out the registration form
  Then I should be able to log in
Merging them
If you want truly human-readable code, you can merge all three of these.
You have your feature file at business level language.
You have your step definitions written with the Screenplay pattern in mind.
You have the Screenplay pattern steps written with the Page Object Model.
This may seem like a lot of layers, but it means that the business will be able to understand the scenarios, that testers and developers looking back over the code will understand the workflow of a given journey, and that when debugging deeper into the code a tester can change the element values used across multiple journeys by changing only one line of code.
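As a rough sketch of that layering (all names are illustrative, and a configured Selenium WebDriver instance named driver is assumed), a CucumberJS step definition delegates to a Screenplay-style task, which in turn talks only to a page object:

const { When } = require('@cucumber/cucumber');

// Page Object Model: only this layer knows about selectors.
const registrationPage = {
  fullName: () => driver.findElement({ css: '#fullName' }),
  nextButton: () => driver.findElement({ css: '#next' }),
};

// Screenplay-style task: describes the workflow, not the elements.
const fillOutHisDetails = (name) => async () => {
  await registrationPage.fullName().sendKeys(name);
  await registrationPage.nextButton().click();
};

// Step definition: the business-language glue.
When('I fill out the registration form', async () => {
  await fillOutHisDetails('John Doe')();
});

A selector change is fixed once in registrationPage, a workflow change is a reordering of tasks, and the feature file stays readable by the business.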
