Overall I'm happy with using Backbone.js for my company's frontend application. However, I've noticed a number of foundational problems that I wonder if anyone else has encountered.
The biggest issue is that the frontend team does not control the API that powers our application. The objects passed are fairly complex in structure: nested arrays, sub-objects, and so on. This in itself is expected. The API serves a different purpose than the frontend, and what each considers an "object" is a completely different thing.
In practice this leads to issues. Namely, one API endpoint may be broken into multiple frontend models. This is a common problem when dealing with APIs, and it's typically addressed through Data Access Objects or a Data Access Layer that translates API objects into internal objects. Backbone, by contrast, expects models to be tightly coupled with API endpoints. Sync operations on a model (i.e. save, fetch) immediately reach out to the API.
Adding to the issues, I seriously believe that toJSON in Backbone does too much. It's used to reproduce the model in a format that can be consumed internally; it defines how the model gets posted to the API; and it's used for equality checks between models in many internal operations. Any of the three could be broken out into its own method.
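To illustrate, here's a sketch of how those three responsibilities could be separated (toApiJSON and isEqualTo are names I made up, not Backbone APIs; options.attrs is Backbone's documented hook for overriding what sync sends):

var FilterModel = Backbone.Model.extend({
    // Internal representation: everything the UI needs
    // (this mirrors Backbone's default toJSON).
    toJSON: function() {
        return _.clone(this.attributes);
    },

    // API representation: only what the endpoint expects.
    toApiJSON: function() {
        return _.pick(this.attributes, 'filterName', 'operator', 'value');
    },

    // Equality: compare only the API-relevant fields.
    isEqualTo: function(other) {
        return _.isEqual(this.toApiJSON(), other.toApiJSON());
    },

    // Make save() send the API shape instead of the default toJSON.
    sync: function(method, model, options) {
        options = options || {};
        if (method === 'create' || method === 'update') {
            options.attrs = this.toApiJSON();
        }
        return Backbone.sync(method, model, options);
    }
});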
Has anyone else dealt with this? What strategies did you use? Implement DAOs? Is there a fork of Backbone that accounts for these issues?
[edit]
A closer-to-real-world case that I've encountered: when we query the API for results, there are about a hundred filters we can pass along with the request. The overall filter structure on that end is pretty simple, an array of filter objects like so:
{
    filterName: '',
    // '~', '=', '<=', '>='
    operator: '',
    value: ''
}
For one particularly problematic filter, depending on the 'operator', we either allow the user to select a single option or allow the user to construct a "sentence" from the available options. The former renders as an HTML select; the latter we implemented with a lextree parser. Obviously the two necessitate wildly different code, so we split this into two different classes, but to the API the filter is the same regardless of our implementation.
This is straightforward enough, but the issues with Backbone come down to how the classes are defined. We may want to make use of the built-in get/set functions, but this will dump properties into attributes, which affects the default toJSON, which in turn is used to build the API representation of this filter and to decide whether this filter is equal to another filter of the same type.
With a Data Access pattern there'd be another layer that knew how to translate that filter to the API and vice versa. Any CRUD operation would be picked up by the specific DAO and processed by proxy.
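Something like this sketch, where FilterDAO, SelectFilter, and SentenceFilter are all hypothetical names:

// Hypothetical data-access layer: the two frontend filter classes never
// touch the API shape directly; the DAO translates in both directions.
var FilterDAO = {
    // API object -> whichever frontend class fits the operator.
    fromApi: function(raw) {
        return raw.operator === '~'
            ? new SentenceFilter(raw.filterName, raw.value)
            : new SelectFilter(raw.filterName, raw.operator, raw.value);
    },

    // Frontend object -> the flat shape the API expects.
    toApi: function(filter) {
        return {
            filterName: filter.name,
            operator: filter.getOperator(),
            value: filter.serializeValue()
        };
    }
};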
As I understand it, an Entity carries fundamental properties, methods, and validations. A User Entity would have name, DoB, ... email, verified, and whatever is deemed 'core' to this project. For instance, one project requires a phone number, while a completely different project would deem a phone number unnecessary but a mailing address essential.
So then, moving outside of the Entity layer, we have the Use-Case layer, which depends on the Entity layer. I imagine a use-case as something slightly more flexible than an entity: this layer allows us to write additional logic and calculations while making use of an existing entity, or multiple entities.
But my first uncertainty lies in whether a use-case can create derived values (DoB => Age) from entity properties and even persist them in storage, or whether the value needs to already exist in the User Entity class. Secondly, does the use-case layer even NEED to use entities? Could I have a use-case that literally just sums(a, b) and may or may not persist that in storage? Thirdly, when persisting entity-related properties into the database, does retrieving them require validation once again? Is this redundant and hurting performance?
Finally, the bigger question is: what is a use-case? Should a use-case be adaptable by being agnostic of where its inputs come from and what it serves them to? Does this just mean that dependency inversion removes the responsibility of which framework it ties to, i.e. using Express or Koa or plain http wouldn't force a rewrite of the core logic? OR does it mean adaptable to something even greater, like whether it serves terminal applications directly or API request/response-style applications via a web server?
If it's the latter, then it's confusing to me, because it has to be agnostic about where its data comes from and goes to, yet these outputs resemble the very medium they will be delivered to. For instance, designing a RESTful API, a use case may be...
getUserPosts(userId, limit, offset), which will output a format best suited for web API consumers (that, to me, is the application logic, right? For a specific application). And it's unlikely that I'll reuse the use-case getUserPosts for a different requester (say, some terminal interface that runs locally and wants a more detailed response). So to me it shines when the time comes to switch between application-specific frameworks, like between Express/Koa/http/Connect for a REST API serving the same application, or between the Node.js and Bun environments to interact with the same terminal, rather than one almighty use-case that criss-crosses every kind of application (web service and terminal simultaneously, or any other).
If it is almighty, should a use-case be designed with a more generalized purpose and made more configurable? Like previously, I could've added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json'). I suppose the forethought to anticipate different usage and scaling requires experience, unless this is where the open-closed principle shines by making it ready to be extended? Or is it just easier to make separate use-cases like getUserPostsWebServices, getUserPostsForLocal, and getPremiumUsersPostsForWebServices? This makes sense to me, because now each use-case has its own constraints: it is not possible for getUserPostsWebServices to reach any more data fetching/manipulation than getUserPostsForLocal or getPremiumUsersPostsForWebServices offers, and the reusability of getUserPostsWebServices does not tie us to any web-server framework. I suppose this is where I would draw the line for a use-case, but I'm inexperienced, and I don't know the answer to this.
I know this has been a regurgitation of my understanding rather than a concrete question, but it still points to the question of what the boundary and definition of a use-case is in clean architecture. Thanks for reading; would anyone chime in to clarify anything I got wrong?
But my first uncertainty lies in whether a use-case can create derived values (DoB => Age) from entity properties and even persist them in storage, or whether the value needs to already exist in the User Entity class.
The age of a user is directly derived from the date of birth, and I think the way it is calculated is the same across different applications. Thus it is application-agnostic logic and should be placed in the entity.
Defining a User.getAge method does not mean that it must be persisted. The entities in the clean architecture are business objects that encapsulate application-agnostic business rules.
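For example, a sketch of such an entity (the field names are illustrative):

// Entity with an application-agnostic derived value. getAge is business
// logic; nothing forces the repository to persist its result.
class User {
    constructor(name, dateOfBirth) {
        this.name = name;
        this.dateOfBirth = dateOfBirth; // a Date
    }

    getAge(now = new Date()) {
        var age = now.getFullYear() - this.dateOfBirth.getFullYear();
        var hadBirthdayThisYear =
            now.getMonth() > this.dateOfBirth.getMonth() ||
            (now.getMonth() === this.dateOfBirth.getMonth() &&
             now.getDate() >= this.dateOfBirth.getDate());
        // Subtract one if the birthday hasn't occurred yet this year.
        return hadBirthdayThisYear ? age : age - 1;
    }
}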
Which properties are persisted is decided in the repository. Usually you only persist the basic properties, not derived ones. But if you need to query entities by derived properties, they can be persisted too.
Persisting time-dependent properties is a bit tricky, since they change as time goes by. E.g. if you persist a user's age and it is 17 at the time you persist it, it might be 18 a few days or even hours later. If you have a use case that searches for all users that are 18 in order to send them an email, you will not find all of them. Time-dependent properties need a kind of heartbeat use case that is triggered by a scheduler and just loads (streams) all entities and persists them again. The repository will then persist the current value of the age, and it can be found by other queries.
Secondly, does the use-case layer even NEED to use entities? Could I have a use-case that literally just sums(a, b) and may or may not persist that in storage?
The use case layer usually uses entities. If your use case were as simple as summing two numbers, it wouldn't have to use entities, but I guess this is a rare case.
Even very small use cases like sums(a, b) can require the use of entities if there are rules on a and b. These can be very simple rules, like a and b must be positive integer values. But even if there are no rules, it can make sense to create entities, because if a and b are custom entities you can give them a name to emphasize that they belong to a critical business concept.
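As a sketch of that idea (PositiveInteger is just an illustrative name):

// A tiny value object that encapsulates the rule "must be a positive
// integer" and gives the concept a name of its own.
class PositiveInteger {
    constructor(value) {
        if (!Number.isInteger(value) || value <= 0) {
            throw new Error('Expected a positive integer, got ' + value);
        }
        this.value = value;
    }
}

// The use case now expresses its rules through the type:
function sum(a, b) { // a and b are PositiveInteger instances
    return new PositiveInteger(a.value + b.value);
}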
Thirdly, when persisting entity related properties into database, does retrieving them require once again validation, is this redundant and hurting performance?
Usually your application is the only client of the database. If so, then your application ensures that only valid entities are stored in the database, and thus it is usually not required to validate them again.
"Valid" can be context-dependent. E.g. if you have an entity named PostDraft, it should be clear that a draft doesn't have the same validation rules as a PublishedPost.
Finally, a note on the performance concerns. The first rule is: measure, don't guess. Write a simple test that creates, e.g., 1,000,000 entities and validates them. Usually a database query and/or the network traffic, I/O in general, is the performance issue, not in-memory computation. Of course you can write code with weird loops that mess up performance, but often this is not the case.
Finally, the bigger question is: what is a use-case? Should a use-case be adaptable by being agnostic of where its inputs come from and what it serves them to? Does this just mean that dependency inversion removes the responsibility of which framework it ties to, i.e. using Express or Koa or plain http wouldn't force a rewrite of the core logic? OR does it mean adaptable to something even greater, like whether it serves terminal applications directly or API request/response-style applications via a web server?
A use case is application-dependent business logic. There are different reasons why the clean architecture (and also others, like the hexagonal architecture) makes use cases independent of the I/O mechanism. One is that it keeps them independent of frameworks, which makes them easy to test. If a use case depended on an HTTP controller, or better said, if you implemented the use case in an HTTP controller, e.g. a REST controller, it would mean that to test it you need to start up an HTTP server, open a socket, write the HTTP request, read the HTTP response, and extract the data you need. Even though there are frameworks and tools that make such a test easy, these tools must ultimately start a server, and this takes time. Tests that are slow are not executed often, are they? And tests are the basis for refactoring. If you don't have tests, or the tests run slowly, you do not execute them. If you do not execute them, you do not refactor. So the code must rot.
In my opinion, testability is most important, and decoupling use cases from any details, as Uncle Bob names the outer layers, increases their testability. Use cases are the heart of an application. That's why they should be easy to test and protected from any dependency on details, so that they do not need to be touched when a detail changes.
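To make that concrete, a sketch (the names are illustrative):

// The use case depends only on an injected repository boundary,
// never on HTTP, so it can be tested with a plain in-memory stub.
function makeGetUserPosts(postRepository) {
    return async function getUserPosts(userId, limit, offset) {
        return postRepository.findByUserId(userId, limit, offset);
    };
}

// Test: no server, no socket, no framework startup.
const stubRepo = {
    findByUserId: async () => [{ id: 1, title: 'hello' }]
};
makeGetUserPosts(stubRepo)(42, 10, 0).then(posts => {
    console.assert(posts.length === 1);
});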
If it is almighty, should a use-case be designed with a more generalized purpose and made more configurable? Like previously, I could've added more parameters: getUserPosts(userId, limit, offset, sideloadingConfig, expandedConfig, format: 'obj' | 'csv' | 'json')
I don't think so. The sideloadingConfig especially, and formats like JSON or CSV, are not parameters for a use case. These parameters belong to a specific kind of frontend, or better said, to a specific kind of controller. A use-case provides the raw business data. It is the responsibility of a controller, or better a presenter, to format it.
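For example, a sketch of that separation (the presenter functions are made-up names):

// Use case: returns raw business data only, no formatting concerns.
async function getUserPosts(userId, limit, offset) {
    // hard-coded here for the sketch; normally a repository call
    return [{ id: 1, title: 'hello', authorId: userId }];
}

// Presenters live with the delivery mechanism, not the use case.
function presentAsJson(posts) {
    return JSON.stringify(posts);
}

function presentAsCsv(posts) {
    var rows = posts.map(p => [p.id, p.title, p.authorId].join(','));
    return ['id,title,authorId'].concat(rows).join('\n');
}

// The same use case serves both a web controller and a CLI:
getUserPosts(42, 10, 0).then(posts => {
    console.log(presentAsJson(posts)); // what an HTTP response would send
    console.log(presentAsCsv(posts)); // what a terminal tool would print
});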
Coming from a .Net/C# background, a common pattern there is inversion of control/dependency injection, where we design an interface and then one or more classes that implement that interface. The various other classes that make up the app take in the interfaces that they need to do their job, and make use of the provided methods without worrying about internal implementation. It's then up to the app configuration to determine which one of the interface implementations should be used. This allows the same interface for things like CRUD operations on an API vs. a database, a file, etc.
To give a concrete example of what I'm trying to achieve in the Next.js world:
Let's say I have a todo app with a useTodos() custom hook that returns todos, and an api/getTodos.ts endpoint that also returns todos. Both the hook and the endpoint have in common that they both "get todos" from some data source and return them.
I want the ability to provide a common interface for getTodos() and provide several different implementations for it. And then a central point where I can control which part of the app uses which implementation. For example getTodos() could have the following implementations:
Call an API
Query Supabase
Query MySQL
Query local storage
Read from a local file
I can then centrally define that the useTodos() hook uses the getTodos() that calls an API, and the api/getTodos.ts endpoint uses the getTodos() that queries Supabase. When we decide Supabase is no longer for us and we move to MySQL, I don't need to hunt down all the places where I'm fetching data; instead I just change my central configuration to use the new MySQL implementation.
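In plain functional terms, I imagine something like this sketch (the module layout and all names are made up):

// Implementations of the same "get todos" contract, each of which
// could live in its own module:
const getTodosFromApi = async () =>
    (await fetch('/api/todos')).json();

const getTodosFromLocalStorage = async () =>
    JSON.parse(localStorage.getItem('todos') || '[]');

// config.js: the single place that wires implementations to consumers.
const dataSources = {
    forUseTodosHook: getTodosFromApi,
    forApiRoute: getTodosFromLocalStorage // swap implementations here
};

// Consumers import only from the config, never a concrete implementation:
async function loadTodosForHook() {
    return dataSources.forUseTodosHook();
}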
I'm not looking for a 1:1 imitation of .Net IoC patterns using some obscure library but rather for how this sort of stuff is routinely done in the JS/TS/React/Next.js world. I'm also using 100% functional programming so I'm not looking for solutions that involve classes.
Thank you for reading this far!
I'm working on a project that brings in a ton of data from one endpoint into a single reducer. I'd like to convert that data into ES6 classes, so I can give them helper methods, provide relations between the data, and not have to work with plain JavaScript objects all the time. Also, to get relations between the data, I'm having to do n-squared computations, and that's slowing down the frontend.
Here are the options I'm seeing:
1) Create a selector that connects to the Redux store. This selector could get the data from the reducer and convert it into the multiple ES6 classes that I've defined. If the reducer gets new data that is different, then the selector will recreate the ES6 class instances. (See the sketch after this list.)
2) https://github.com/tommikaikkonen/redux-orm
This seems fantastic as well.
3) Create multiple selectors on the data set that will each compute a specified relation in the data set, so I can just call that selector each time I want a relation that would otherwise be an n-squared computation to get.
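To make options 1) and 3) concrete, here's roughly the shape I have in mind (a sketch using reselect's createSelector, assuming a normalized state.items slice; the Item class is illustrative):

import { createSelector } from 'reselect';

// Illustrative ES6 class wrapping a plain item object.
class Item {
    constructor(raw) {
        Object.assign(this, raw);
    }
    get label() {
        return this.name.toUpperCase();
    }
}

const selectRawItems = state => state.items;

// Option 1: instances are rebuilt only when state.items changes.
const selectItemInstances = createSelector(
    [selectRawItems],
    raws => raws.map(raw => new Item(raw))
);

// Option 3: the expensive relation is computed once per input change
// instead of on every call.
const selectItemsByParentId = createSelector(
    [selectRawItems],
    raws => raws.reduce((map, raw) => {
        (map[raw.parentId] = map[raw.parentId] || []).push(raw);
        return map;
    }, {})
);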
My question is: which route of the three should I take? Is there an alternative besides these three? Or do people mostly just work with plain JavaScript objects on the frontend and not deal with ES6 classes?
Update:
Two months later, and I'm still using Redux-ORM in production and it is fantastic! Highly recommend.
It's certainly entirely possible to do all that handling with "plain" functions and selectors. There's info on normalization in the Redux FAQ, and I have some articles on selectors and normalization as part of my React/Redux links list.
That said, I am a huge proponent of Redux-ORM. It's a fantastic tool for helping manage normalized/relational data in your Redux store. I use it for normalizing nested data, querying data, and updating that data immutably.
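To give a taste of it (a sketch only, and hedged: Redux-ORM's API has changed between versions, and the Todo and User models here are made up):

import { ORM, Model, attr, fk } from 'redux-orm';

// Declare models and their relations once...
class User extends Model {}
User.modelName = 'User';
User.fields = {
    id: attr(),
    name: attr()
};

class Todo extends Model {}
Todo.modelName = 'Todo';
Todo.fields = {
    id: attr(),
    text: attr(),
    // each Todo points at a User; Users get a `todos` back-reference
    user: fk('User', 'todos')
};

const orm = new ORM();
orm.register(User, Todo);

// ...then query through a session instead of hand-writing joins:
// const session = orm.session(state.orm);
// const todos = session.User.withId(1).todos.toRefArray();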
My Practical Redux blog post series includes two articles talking about Redux-ORM specifically: Redux-ORM Basics and Redux-ORM Concepts and Techniques. The latest post, Practical Redux Part 5: Loading and Displaying Data, shows Redux-ORM in action as well.
The author of Redux-ORM, Tommi Kaikkonen, actually just put up a beta of a major update to Redux-ORM that improves the API and behavior, which I'm looking forward to playing with.
I definitely recommend it!
This question is about a best practice for structuring data objects using JS modules on the server for consumption by another module.
We have many modules for a web application, like a login view and form handlers, that contain disparate fragments of data, like user state, application state, etc., which we need to send to an analytics suite that requires a specific object format. Where should we map the data? (Pick the things we want to send, rename keys, delete unwanted values.)
In each module. E.g.: login knows about analytics and its formatting requirements.
In an analytics module. Now analytics has to know about each and every module's source format.
In separate [module]-analytics modules. Then we'll have dozens of files which don't have much context to debug and understand.
My team is split on what is the right design. I'm curious if there is some authoritative voice on the subject that can help us settle this.
Thanks in advance!
For example,
var objectForAnalytics = {
    logged_in: user.get('isLoggedIn'),
    app_context: application.get('environment')
};
analytics.send(objectForAnalytics);
This short sample script uses functions from 3 modules. Where should it exist in a well-organized app?
JS doesn't do marshaling, in the traditional sense.
Since the language encourages duck typing and runs all loaded modules in a single VM, each module can simply pass the data and let a consumer choose the fields that interest them.
Marshaling has two primary purposes, historically:
Delivering another module the data that it expects, since C-style structures and objects do not support extra data (usually).
Transferring data between two languages or modules built on two compilers, which may be using different memory layouts, calling conventions, or ABIs.
JavaScript solves the second problem using JSON, but the first is inherently solved with dictionary-style objects. Passing an object with 1000 keys is just as fast as passing an object with 2, so you can (and often are encouraged to) simply give the consumer what you have and allow them to decide what they need.
This is further reinforced in languages like TypeScript, where the contract on a parameter type is simply a minimum set of requirements. TS allows you to pass an object that exceeds those requirements (by having other fields), only verifying that you are aware of what the consumer states it requires in its contract and have met that.
When you do need to transform an object, perhaps because two libraries use the same data with different keys, creating a new object with shallow references is pretty easy:
let foo = {
    bar: old.baz,
    baz: old.bin
};
This does not change or copy the underlying data, so any changes made to a shared object through the original (or the copy) will be visible through the other. This does not apply to primitive values, which are immutable and copied by value, so they will not propagate.
I've started to wrap my functions inside of Objects, e.g.:
var Search = {
    carSearch: function(color) {
    },
    peopleSearch: function(name) {
    },
    ...
}
This helps a lot with readability, but I continue to have issues with reusability. To be more specific, the difficulty is in two areas:
Receiving parameters. A lot of the time I will have a search screen with multiple input fields and a button that calls the JavaScript search function. I have to either put a bunch of code in the onclick of the button to retrieve and then marshal the values from the input fields into the function call, or I have to hardcode the HTML input field names/IDs so that I can subsequently retrieve them with JavaScript. The solution I've settled on is to pass the field names/IDs into the function, which then uses them to retrieve the values from the input fields. This is simple but really seems improper.
Returning values. The effect of most Javascript calls tends to be one in which some visual on the screen changes directly, or as a result of another action performed in the call. Reusability is toast when I put these screen-altering effects at the end of a function. For example, after a search is completed I need to display the results on the screen.
How do others handle these issues? Putting my thinking cap on leads me to believe that I need a page-specific layer of JavaScript between each use in my application and the generic, application-wide methods I create. Using the previous example, I would have a search button whose onclick calls a myPageSpecificSearchFunction, in which the search field IDs/names are hardcoded, and which marshals the parameters and calls the generic search function. The generic function would return data/objects/variables only, and would not directly read from or make any changes to the DOM. The page-specific search function would then receive this data back and alter the DOM appropriately.
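In code, the split I'm imagining looks roughly like this (names are illustrative):

// Generic, application-wide: data in, data out. No DOM access.
var allCars = [{ name: 'Roadster', color: 'red' }];

function searchCars(color) {
    return allCars.filter(function(car) {
        return car.color === color;
    });
}

// Page-specific glue: knows this page's field IDs and where results go.
function myPageSpecificSearchFunction() {
    var color = document.getElementById('car-color').value;
    var results = searchCars(color);
    document.getElementById('results').innerHTML = results
        .map(function(car) { return '<li>' + car.name + '</li>'; })
        .join('');
}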
Am I on the right path or is there a better pattern to handle the reuse of Javascript objects/methods?
Basic Pattern
In terms of your basic pattern, can I suggest modifying your structure to use the module pattern and named functions:
var Search = (function() {
    var pubs = {};

    pubs.carSearch = carSearch;
    function carSearch(color) {
    }

    pubs.peopleSearch = peopleSearch;
    function peopleSearch(name) {
    }

    return pubs;
})();
Yes, that looks more complicated, but that's partially because there's no helper function involved. Note that now, every function has a name (your previous functions were anonymous; the properties they were bound to had names, but the functions didn't, which has implications in terms of the display of the call stack in debuggers and such). Using the module pattern also gives you the ability to have completely private functions that only the functions within your Search object can access. (Just declare the functions within the big anonymous function and don't add them to pubs.) More on my rationale for that (with advantages and disadvantages, and why you can't combine the function declaration and property assignment) here.
Retrieving Parameters
One of the functions I really, really like from Prototype is the Form#serialize function, which walks through the form elements and builds a plain object with a property for each field based on the field's name. (Prototype's current – 1.6.1 – implementation has an issue where it doesn't preserve the order of the fields, but it's surprising how rarely that's a problem.) It sounds like you would be well-served by such a thing and they're not hard to build; then your business logic is dealing with objects with properties named according to what they're related to, and has no knowledge of the actual form itself.
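Building such a thing yourself is only a few lines; here's a simplified sketch (ignoring multi-selects, repeated names, and disabled fields):

// Walk a form's elements and build { fieldName: value, ... }, in the
// spirit of Prototype's Form#serialize (heavily simplified).
function serializeForm(form) {
    var data = {};
    for (var i = 0; i < form.elements.length; i++) {
        var el = form.elements[i];
        if (!el.name) continue;
        if ((el.type === 'checkbox' || el.type === 'radio') && !el.checked) {
            continue;
        }
        data[el.name] = el.value;
    }
    return data;
}

// Business logic then sees only named values, never the form itself:
// search(serializeForm(document.getElementById('search-form')));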
Returning Values / Mixing UI and Logic
I tend to think of applications as objects and the connections and interactions between them. So I tend to create:
Objects representing the business model and such, irrespective of interface (although, of course, the business model is almost certainly partially driven by the interface). Those objects are defined in one place, but used both client- and server-side (yes, I use JavaScript server-side), and designed with serialization (via JSON, in my case) in mind so I can send them back and forth easily.
Objects server-side that know how to use those to update the underlying store (since I tend to work on projects with an underlying store), and
Objects client-side that know how to use that information to render to the UI.
(I know, hardly original!) I try to keep the store and rendering objects generic so they mostly work by looking at the public properties of the business objects (which is pretty much all of the properties; I don't use the patterns like Crockford's that let you really hide data, I find them too expensive). Pragmatism means sometimes the store or rendering objects just have to know what they're dealing with, specifically, but I do try to keep things generic where I can.
I started out using the module pattern, but then started doing everything as jQuery plugins. Plugins allow you to pass page-specific options.
Using jQuery would also let you rethink the way you identify your search terms and find their values. You might consider adding a class to every input and using that class to avoid specifically naming each input.
JavaScript is ridiculously flexible, which means that your design is especially important, as you can do things in many different ways. This flexibility is probably what makes JavaScript feel like it lends itself less to reusability.
There are a few different notations for declaring your objects (functions/classes) and then namespacing them, and it's important to understand the differences. As mentioned in a comment here, 'namespacing is a breeze', and it is a good place to start.
I wouldn't be able to go far enough in this reply and would only be paraphrasing, so I recommend buying these books:
Pro JavaScript Design Patterns
Pro JavaScript Techniques