breeze.js : how do I exclude tables/entities from breeze metadata collection? - javascript

I have a single-page application using Entity Framework on the backend and Breeze.js on the client. I'm also using the Breeze .NET EF classes. In order for the Breeze client to create entities on the client, it calls a controller method named "Metadata". This method returns metadata for every entity in the database, even those that may never be used on the client.
Even though the metadata contains no code logic, it does contain a complete schema of the database. Some of these entities are used for security and business logic, and I don't want that entire structure exposed to the world.
Is there a way to exclude entities (not just individual properties) from the breezejs metadata collection?
Thanks

The easy way is to create a DbContext that has only those classes and relationships that you want to expose. Use the fluent interface to shrink it down and cauterize relationships that you don't want.
Then create an instance of an EfContextProvider based on this limited DbContext.
You can use this cut-down DbContext exclusively for metadata generation if you wish. You can switch to something more robust (wrapped in a different EfContextProvider) if you must.
See the documentation chapter "EF as a Design Tool".

Related

Best practice for using different implementations of data fetching code in React/Next.js app?

Coming from a .Net/C# background, a common pattern there is inversion of control/dependency injection, where we design an interface and then one or more classes that implement that interface. The various other classes that make up the app take in the interfaces that they need to do their job, and make use of the provided methods without worrying about internal implementation. It's then up to the app configuration to determine which one of the interface implementations should be used. This allows the same interface for things like CRUD operations on an API vs. a database, a file, etc.
To give a concrete example of what I'm trying to achieve in the Next.js world:
Let's say I have a todo app with a useTodos() custom hook that returns todos, and an api/getTodos.ts endpoint that also returns todos. Both the hook and the endpoint have in common that they both "get todos" from some data source and return them.
I want the ability to provide a common interface for getTodos() and provide several different implementations for it. And then a central point where I can control which part of the app uses which implementation. For example getTodos() could have the following implementations:
Call an API
Query Supabase
Query MySQL
Query local storage
Read from a local file
I can then centrally define that the useTodos() hook uses the getTodos() that calls an API, and the api/getTodos.ts uses the getTodos() that queries Supabase. When we decide Supabase is no longer for us and we move to MySQL, I don't need to hunt down all the places where I'm fetching data, instead I just change my central configuration to use the new MySQL implementation.
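In plain functional JavaScript this central wiring needs no DI container: the "interface" is just a function shape, and one config module decides which implementation each consumer gets. A minimal sketch; all names below (`getTodos`, `config`, `useTodos`, the `/api/getTodos` endpoint, the in-memory store) are hypothetical:

```javascript
// The shared contract is just a function shape: () => Promise<Todo[]>.

// Implementation 1: an HTTP API (endpoint name assumed).
const getTodosFromApi = () =>
  fetch('/api/getTodos').then((res) => res.json());

// Implementation 2: an in-memory store, standing in for Supabase/MySQL/etc.
const memoryStore = [{ id: 1, title: 'write docs', done: false }];
const getTodosFromMemory = () => Promise.resolve(memoryStore);

// Central configuration: the single place that decides which implementation
// each consumer uses. Moving from Supabase to MySQL means changing one line.
const config = {
  clientHook: getTodosFromMemory, // e.g. what useTodos() is wired to
  apiRoute: getTodosFromApi,      // e.g. what api/getTodos.ts is wired to
};

// Consumers are built from the contract, never from a concrete source.
const makeUseTodos = (getTodos) => () => getTodos();
const useTodos = makeUseTodos(config.clientHook);
```

Because implementations are passed in as plain function arguments, each consumer stays testable with a stubbed `getTodos` and never imports a data source directly.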
I'm not looking for a 1:1 imitation of .Net IoC patterns using some obscure library but rather for how this sort of stuff is routinely done in the JS/TS/React/Next.js world. I'm also using 100% functional programming so I'm not looking for solutions that involve classes.
Thank you for reading this far!

Detecting where the model is used between C# and React

I'm building an app with a C# REST API backend and a React frontend.
I would like to ask whether you keep an inventory of where backend models are used in the frontend.
In Visual Studio (C#) I can press F12 to find references. I have no idea how to do this across a REST boundary, where the JavaScript client never recreates the class.
The backend developer may not have access to the frontend code. Do you have any practice for recording where a model is used across the REST/React boundary, so the developer can quickly see whether it is safe to change the shape of a backend model class?
Well, there is no strict connection between models defined in the backend and frontend in this case.
The connection is only made when a client calls an endpoint on the server.
The most common practice is to return DTOs (Data Transfer Objects) containing exactly the properties the frontend needs, rather than the models themselves.
On the backend you map your models to DTOs, often with a library such as AutoMapper to support the mapping process; on the frontend you mirror the DTO's shape.
In that case, tracking where a model is used comes down to tracing it by endpoint URL.
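To make that DTO boundary explicit and searchable on the frontend, one option is a small mapper function per endpoint, so "find references" on the mapper shows every place the DTO is consumed. A minimal sketch; the field names and endpoint URL are all assumptions:

```javascript
// Hypothetical wire shape, as the backend's DTO returns it:
// { todo_id: 1, todo_title: '...', is_done: 0 }

// Map the wire DTO to the frontend model. If the backend team renames a
// field, this mapper is the only place that needs to change.
function mapTodoDto(dto) {
  return {
    id: dto.todo_id,
    title: dto.todo_title,
    done: Boolean(dto.is_done),
  };
}

// Every fetch for this endpoint funnels through the mapper, so searching
// for mapTodoDto effectively inventories the DTO's usage in the frontend.
async function fetchTodos() {
  const res = await fetch('/api/todos'); // endpoint URL assumed
  const dtos = await res.json();
  return dtos.map(mapTodoDto);
}
```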

MVC application: where to adapt model data for the view?

I'm implementing a fairly simple application in javascript using the MVC approach. My views are using mustache as templating system.
When the application loads, an API is queried and returns a complex object, which I store in the model. When it's time to visualise the data in the view, I need to transform this complex object into a much simpler version with fewer properties and less nesting, so the template engine can render the view.
I'm wondering whether it's the controller's responsibility to "adapt" the data for the view, or whether this process should be delegated to some other part of the application.
I use AutoMapper to convert Entity Framework models into simpler ViewModel/DTO objects. It works by convention, and when the convention doesn't fit, you use a fluent API to tell it how to convert the properties.
Very simple to use and you only need to define your mapping logic once, which is exactly what you want.
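In a JavaScript MVC app without AutoMapper, the same idea reduces to a pure adapter function that the controller calls before handing data to mustache. A minimal sketch; the object shapes and names are hypothetical:

```javascript
// A view-model adapter: a pure function that flattens the complex API
// object into the simple structure the mustache template expects.
// Keeping it pure makes it easy to unit-test in isolation.
function toArticleViewModel(apiArticle) {
  return {
    title: apiArticle.content.title,
    author: apiArticle.meta.author.displayName,
    // Pre-format dates/numbers here so the template stays logic-free.
    published: new Date(apiArticle.meta.publishedAt).toDateString(),
  };
}

// Controller: fetch, adapt, render. The adaptation step is delegated to
// the adapter rather than inlined in the controller:
// const html = Mustache.render(template, toArticleViewModel(model));
```

Defining the mapping once per view, in its own module, gives you the single-definition property you get from AutoMapper on the .NET side.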
Maybe you should create a DTO object and map that object to your ViewModel.

Where to format collections / objects

From front end architectural point of view, what is the most common way to store scripts that perform transformations on collections of objects/models? In what folder would you store it, and what would you name the file / function?
Currently I have models, views, controllers, repositories, presenters, components and services. Where would you expect it?
As a component (what would you name it?)? As a service? Currently I use services to make the connection between the presenter and the repository to handle data interactions with the server.
Should I call it a formatter? A transformer? If there is a common way to do, I'd like to know about it.
[...] models, views, controllers, repositories, presenters, components and services. Where would you expect it?
Services, most definitely. This is an interception service for parsing data.
Should I call it a formatter? A transformer?
Well, transformer (or data transformer) is actually quite good IMO. Data interceptor also comes to mind, and data parser, obviously.
If there is a common way to do, I'd like to know about it.
Yes, there is! Override the model's / collection's parse() function to transform the data fetched from the server into your preferred data structure.
Note that you should pass {parse: true} in the options to make it work.
This, of course, does not contradict using the services you wrote from within that function. You can encapsulate the parsing logic in those scripts, and reuse it anywhere you'd like.
Bear in mind that there will probably be very little code reuse when using parse(), as each transformation relates to a single model or collection.
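To keep some reuse despite that, the transformation can live in a plain function that parse() merely delegates to. A minimal sketch, assuming a Backbone model and hypothetical field names:

```javascript
// The transformation itself is a plain, reusable function: it takes the
// raw server payload and returns the attributes hash the app wants.
function parseTodoResponse(response) {
  return {
    id: response.todo_id,
    title: response.data.title,
    done: response.data.status === 'complete',
  };
}

// Wiring it into Backbone: fetch() routes the raw payload through parse()
// before set(); pass {parse: true} when constructing from raw JSON.
// const Todo = Backbone.Model.extend({ parse: parseTodoResponse });
// new Todo(rawServerJson, { parse: true });
```

Because the function is independent of Backbone, the same transformer can be reused from a service, a collection, or a test without instantiating a model.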

Backbone design patterns similar to DAOs

Overall I'm happy with using Backbone.js for my company's frontend application. However I've noticed a lot of foundational problems that I wonder if anyone else encountered.
The biggest issue is that the frontend team does not control the API that powers our application. The objects passed are fairly complex in structure: nested arrays, sub-objects, etc. This in itself is expected. The API serves a different purpose than the frontend, and what each considers an "object" can be a completely different thing.
In practice this leads to issues. Namely, one API endpoint may be broken into multiple frontend models. This is a common problem when dealing with APIs. It's typically addressed through Data Access Objects or a Data Access Layers that translate API objects into internal objects. Backbone by contrast expects models to be tightly coupled with the API endpoints. Sync operations on a model (i.e. save, fetch) immediately reach out to the API.
Adding to the issues, I seriously believe that toJSON in Backbone does too much. It's used to reproduce the model in a format that can be consumed internally; defines how the model should get posted to an API; and used for equality checks between models for many internal operations. Any of the three could get broken out into their own method.
Has anyone else dealt with this? What strategies did you use? Implement DAOs? Is there a fork of Backbone that accounts for these issues?
[edit]
A closer-to-real-world case that I've encountered: when we query the API for results, there are about a hundred filters we can pass along with the request. The overall filter structure is pretty simple on that end, an array of filter objects like so:
{
    filterName: '',
    operator: '',  // one of '~', '=', '<=', '>='
    value: ''
}
For one particularly problematic filter, depending on the 'operator' we either allow the user to select a single option or we allow the user to construct a "sentence" from the available options. The former option renders as an HTML select, the latter we implemented with a lextree parser. Obviously the two necessitate wildly different code so we split this into two different classes, but to the API the filter is the same regardless of our implementation.
This is straightforward enough, but the issues with Backbone show up in how the classes are defined. We may want to make use of the built-in get/set functions, but doing so dumps properties into attributes, which affects the default toJSON, and toJSON is also used to build the API representation of this filter and to decide whether this filter is equal to another filter of the same type.
With a Data Access pattern there'd be another layer that knew how to translate that filter to the API and vice versa. Any CRUD operation would get picked up by the specific DAO and processed by proxy.
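For the filter example, such a layer can be a pair of translation functions that alone know the API's { filterName, operator, value } shape, while the two frontend classes keep their own representations. A minimal sketch; the filter kinds, field names, and sentence serialization are all assumptions:

```javascript
// DAO boundary: translate frontend filter objects to the single API shape.
const toApiFilter = (filter) => {
  switch (filter.kind) {
    case 'select':
      // The HTML-select filter carries one chosen option.
      return { filterName: filter.name, operator: '=', value: filter.choice };
    case 'sentence':
      // The parsed "sentence" is serialized back to the API's string form.
      return { filterName: filter.name, operator: '~', value: filter.tokens.join(' ') };
    default:
      throw new Error(`unknown filter kind: ${filter.kind}`);
  }
};

// And back: pick the frontend representation from the API operator.
const fromApiFilter = (apiFilter) =>
  apiFilter.operator === '~'
    ? { kind: 'sentence', name: apiFilter.filterName, tokens: apiFilter.value.split(' ') }
    : { kind: 'select', name: apiFilter.filterName, choice: apiFilter.value };
```

With this in place, the model classes never touch the wire format; sync-like operations hand their payloads to the DAO, which proxies the translation in both directions.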
