I am developing a Node.js project in which the end user can create and update outlets in a MongoDB database. Now, let's say there are hundreds of fields in the outlet schema, but a user may not update each and every field; one may update 2 fields, another may update 3. So I want to create a function that can handle each type of request. I don't know whether this can be done or if there is some other way, but I am new to this, so can you suggest something suitable for my project? Thanks in advance!
Sorry for the confusion earlier.
I am developing a project in Node.js for retail outlets. Each outlet has over 50 fields in the database at registration time. Registration is fine: the POST request via the API supplies all the required data.
But I am not sure what approach to take when updating any of those fields. Sometimes only 1 field will change, other times a bunch of them, in no particular order or sequence.
Example :
{"id":"ABCD", "name":"MNOPQR"}
{"id":"ABCD", "time":123, "name":"ZYX"}
So here, in the first request I need to update only the name, while in the second I need to update both name and time.
Is there any way I can manage this dynamic JSON parsing on the server and update only those fields (in the database) that are mentioned in the request?
You have several approaches you can use.
One is to accept an object with the changes:
function doTheChanges(changes) {
    Object.keys(changes).forEach(name => {
        const value = changes[name];
        // Use `name` (the field name) and `value` (the value) to do the update
    });
}
Called like this:
doTheChanges({foo: "bar", biz: "baz"});
...to change the foo field to "bar" and the biz field to "baz". (Names that have invalid identifier chars, etc., can be put in double quotes: {"I'm an interesting field name": "bar"}.)
Alternatively, you could accept an array of changes with name and value properties:
function doTheChanges(changes) {
    changes.forEach(({name, value}) => {
        // Use `name` (the field name) and `value` (the value) to do the update
    });
}
Called like this:
doTheChanges([
    {
        name: "foo",
        value: "bar"
    },
    {
        name: "biz",
        value: "baz"
    }
]);
You could also just accept a varying number of parameters, where each parameter is an object with name and value properties:
function doTheChanges(...changes) {
    changes.forEach(({name, value}) => {
        // Use `name` (the field name) and `value` (the value) to do the update
    });
}
Called like this:
doTheChanges(
    {
        name: "foo",
        value: "bar"
    },
    {
        name: "biz",
        value: "baz"
    }
);
Note that that's very similar to the array option.
Or use the builder pattern, but it's probably overkill here.
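For completeness, here is a minimal builder-pattern sketch (the `ChangeBuilder` name is made up for illustration): collect the changes fluently, then hand them over as one plain object.

```javascript
// Hypothetical ChangeBuilder: accumulates field changes and
// returns them as one plain object at the end.
class ChangeBuilder {
  constructor() {
    this.changes = {};
  }
  set(name, value) {
    this.changes[name] = value;
    return this; // chainable
  }
  build() {
    return this.changes;
  }
}

const changes = new ChangeBuilder()
  .set("foo", "bar")
  .set("biz", "baz")
  .build();
// `changes` is now { foo: "bar", biz: "baz" }, ready to pass to doTheChanges(changes)
```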
Instead of a PUT request you can use a PATCH request, so that only the values you send get changed in the database.
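A minimal sketch of that idea, assuming an Express PATCH route and a Mongoose `Outlet` model (both names are illustrative, not from the question): build a `$set` from only the fields present in the request body, so untouched fields keep their current values.

```javascript
// Build a MongoDB `$set` update from whatever fields the client sent.
// The `id` field identifies the document, so it is excluded from the update.
function buildSetUpdate(changes) {
  const update = {};
  for (const [field, value] of Object.entries(changes)) {
    if (field === "id") continue;
    update[field] = value;
  }
  return { $set: update };
}

// Hypothetical route handler (needs a real Express app and Mongoose model):
// app.patch("/outlets/:id", async (req, res) => {
//   await Outlet.updateOne({ _id: req.params.id }, buildSetUpdate(req.body));
//   res.sendStatus(204);
// });
```

With this, {"id":"ABCD", "name":"MNOPQR"} updates only name, while {"id":"ABCD", "time":123, "name":"ZYX"} updates both time and name.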
My problem is updating a specific field of a document.
Imagine we have multiple input fields; when I change just one field of the document, the rest become null, since I am only updating one of them. Is there a simple way to update one field and leave the rest unchanged?
I could use a switch case or if-else, but I don't think that is an appropriate way to solve this kind of issue.
updateChr: async (args: any) => {
    try {
        const data = await Client.findByIdAndUpdate(
            args.clientId,
            {
                $set: {
                    chronology: {
                        status: args.clientInput.status,
                        note: args.clientInput.note,
                        date: args.clientInput.date,
                        file: {
                            filename: args.clientInput.file.filename,
                            data: args.clientInput.file.data,
                            type: args.clientInput.file.type,
                            size: args.clientInput.file.size,
                        }
                    }
                }
            },
            { useFindAndModify: false }
        );
        return transformClient(data);
    } catch (err) {
        throw err;
    }
},
So this part right here is what you need to pay attention to:
{
    $set: {
        chronology: {
            status: args.clientInput.status,
            note: args.clientInput.note,
            date: args.clientInput.date,
            file: {
                filename: args.clientInput.file.filename,
                data: args.clientInput.file.data,
                type: args.clientInput.file.type,
                size: args.clientInput.file.size,
            }
        }
    }
}
What you're telling Mongo is that, no matter what's in those fields now (or even if they don't currently exist), it should set their new values to what you are providing. If you provide a null value, it will set the new value to null. If you don't want to update fields whose new value is null, you'll need to clean up your data before pushing it to Mongo. The easiest way is to sanitize the data before adding it to the object you're referencing above. Once you have only the data that actually exists, loop over it to build the update object for Mongo, rather than setting it statically as is done here.
Alternatively, if the data that actually should exist in those fields is available at this stage, you could loop over the null values and fill them in with the current values in mongo or whatever they should be.
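A minimal sketch of that clean-up step (the `buildUpdate` name is made up for illustration): drop null/undefined values and flatten nested objects into dotted paths, so `$set` only touches the sub-fields that were actually provided.

```javascript
// Recursively build a flat `$set` object, skipping null/undefined values.
// Nested objects become dotted paths ("chronology.file.filename"), so MongoDB
// updates only those sub-fields instead of replacing the whole embedded
// document. (Caveat: special object types like Date would need their own branch.)
function buildUpdate(input, prefix = "") {
  const set = {};
  for (const [key, value] of Object.entries(input)) {
    if (value === null || value === undefined) continue; // skip missing fields
    const path = prefix ? prefix + "." + key : key;
    if (typeof value === "object" && !Array.isArray(value)) {
      Object.assign(set, buildUpdate(value, path)); // recurse into nested docs
    } else {
      set[path] = value;
    }
  }
  return set;
}
```

You could then call something like `Client.findByIdAndUpdate(args.clientId, { $set: buildUpdate({ chronology: args.clientInput }) }, { useFindAndModify: false })`, so that a null note leaves the existing note untouched.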
I'm using Angular 5.
I have "fake back-end" (array of items).
My case:
I'm expecting the following object structure:
id: number,
title: string
But the back-end sends me the wrong structure:
id: number,
name: string.
I need to receive the data from the back-end, and if a field name is wrong (in my case "name" is wrong; it should be "title"), I should rename the field and return a valid object.
P.S. I have an interface and a class.
A good practice in large apps, where you don't have much control over the backend, is to create a mapper for each response type you expect.
For example you make an http request to retrieve a list of cars from your backend.
When you retrieve the response, you pass the data to a specific mapping function.
class CarMapper {
    // map API to APP
    public serverModelToClientModel(apiModel: CarApiModel): CarAppModel {
        const appModel = new CarAppModel(); // your Car constructor
        // map each property
        appModel.id = apiModel.id_server;
        appModel.name = apiModel.title;
        return appModel; // and return YOUR model
    }
}
This way, on the client side you always have the correct data model, and you can adapt to any model change made on the backend.
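As a usage sketch in plain JavaScript (with the mapper returning a plain object rather than a `CarAppModel` instance; the `id_server`/`title` field names are the assumed server shape): run the mapper over every item of the raw response before the rest of the app sees it.

```javascript
// Plain-object variant of the CarMapper idea.
class CarMapper {
  serverModelToClientModel(apiModel) {
    return {
      id: apiModel.id_server, // server's id field  -> our id
      name: apiModel.title,   // server's title     -> our name
    };
  }
}

// Map a whole response array before handing it to the rest of the app:
const mapper = new CarMapper();
const response = [
  { id_server: 1, title: "Golf" },
  { id_server: 2, title: "Polo" },
];
const cars = response.map((item) => mapper.serverModelToClientModel(item));
// cars: [{ id: 1, name: "Golf" }, { id: 2, name: "Polo" }]
```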
You can check if the object has a name key and, if it does, create another object with title:
if (obj.hasOwnProperty("name")) {
    obj = {
        id: obj.id,
        title: obj.name
    };
}
Background Information
In short, I'm looking to achieve "mostly" what's shown here ...
http://demos.telerik.com/kendo-ui/treelist/remote-data-binding
... except it's a bit of a mind bender, and in my case the data comes from more than one base endpoint URL.
I am trying to build a generic query building page that allows users to pick a context, then a "type" (or endpoint) and then from there build a custom query on that endpoint.
I have managed to get to this point for a simple query, but now I'm trying to handle more complex scenarios where I retrieve child, or deeper, data items from the endpoint in question.
With this in mind ...
The concept
I have many endpoints, not all of which are OData, but they mostly follow OData v4 rules; so, having selected an endpoint, I am trying to build a "TreeGrid" that will expose the expansion options available to the query.
All my endpoints have a custom function called GetMetadata() which describes the type information for that endpoint, where an endpoint is for the most part a REST CRUD<T> implementation that may or may not have some further custom functions to handle a few other business scenarios.
So, given a HTTP get request to something like ...
~/SomeContext/SomeType/GetMetadata()
... I would get back an object that looks much like an MVC / WebAPI Metadata container.
That object has a property called "Properties", some entries of which are scalar and some of which are complex (as defined in the data).
I am trying to build a TreeListDataSource or HierarchicalDataSource object that I can bind to the Kendo TreeList control for only the complex properties: one that dynamically builds the right GET url for the metadata and lists the complex properties of the type, based on the property information from the parent type, with the root endpoint defined by other controls on the page.
The Problem
I can't seem to figure out how to configure the kendo datasource object for the TreeGrid to get the desired output, I think for possibly one of two reasons ...
The TreeListDataSource object as per the demo shown here: http://demos.telerik.com/kendo-ui/treelist/local-data-binding seems to imply that the hierarchy based control wants a flat data source.
I can't figure out how to configure the datasource in such a way that I could pass in the parent meta information (data item from the source) in order to build the right endpoint url for the get request.
function getDatasource(rootEndpoint) {
    return {
        pageSize: 100,
        filter: { logic: 'and', filters: [{ /* TODO: possibly filter properties in here? */ }] },
        type: 'json',
        transport: {
            read: {
                url: function (data) {
                    // TODO: figure out how to set this based on parent
                    var result = my.api.rootUrl + rootEndpoint + "/GetMetadata()";
                    return result;
                },
                dataType: 'json',
                beforeSend: my.api.beforeSend
            }
        },
        schema: {
            model: {
                id: 'Name',
                fields: {
                    Type: { field: 'Type', type: 'string' },
                    Template: { field: 'Template', type: 'string' },
                    DisplayName: { field: 'DisplayName', type: 'string' },
                    ShortDisplayName: { field: 'ShortDisplayName', type: 'string' },
                    Description: { field: 'Description', type: 'string' },
                    ServerType: { field: 'ServerType', type: 'string' }
                }
            },
            parse: function (data) {
                // the object "data" passed in here is a meta container: a single object that contains a Properties array.
                $.each(data.Properties, function (idx, item) {
                    item.ParentType = data;
                    item.Parent = ??? where do I get this ???
                });
                return data.Properties;
            }
        }
    };
}
Some of my problem may be down to the fact that metadata inherently doesn't have primary keys. I wondered if using parse to attach a generated GUID as the key might be an idea, but I think Kendo uses the id when asking the API for children.
So it turns out that Kendo is just not geared up to do anything more than serve data from a single endpoint, and the kind of thing I'm doing here is a little more complex than that. Furthermore, because the data is not entity-type data, I don't have common things like keys and foreign keys.
With that in mind, I chose to take the problem away from Kendo altogether and handle the situation with a bit of a hack that "behaves like a normal Kendo expand, but not really" ...
In a TreeGrid, when Kendo shows an expandable row it renders an expand icon (a .k-i-expand element) in the first cell.
With no expanded data, or with a data source that is not bound to a server, this element is not rendered.
So I faked it in place and added an extra class to my version: .not-loaded.
That meant I could hook a custom block of JS up to the click of my "fake expand" to build the right URL, do my custom stuff, fake/create some ids, and then hand the data to the data source.
expandList.on('click', '.k-i-expand.not-loaded', function (e) {
    var source = expandList.data("kendoTreeList");
    var cell = $(e.currentTarget).closest('td');
    var selectedItem = source.dataItem($(e.currentTarget).closest('tr'));
    my.type.get(selectedItem.ServerType, ctxList.val(), function (meta) {
        var newData = JSLINQ(meta.Properties)
            .Select(function (i) {
                i.id = selectedItem.id + "/" + i.Name;
                i.parentId = selectedItem.id;
                i.Selected = my.type.ofProperty.isScalar(i);
                i.TemplateSource = buildDefaultTemplateSourceFor(i);
                return i;
            })
            .ToArray();
        for (var i in newData) {
            source.dataSource.add(newData[i]);
        }
        $(e.currentTarget).remove();
        source.expand(selectedItem);
        buildFilterGrid();
        generate();
    });
});
This way, Kendo gets what it expects for a TreeList, "a flat set with parent/child relationships", and I do all the heavy lifting.
I used a bit of JSLINQ magic to make the heavy lifting a bit more "C#-like" (I prefer C#, after all), but in short all it does is grab the parent item that was expanded, use its id as the parent id, and generate a new id for each current item as parent.id + "/" + current.Name. This way everything is unique, since two properties on an object can't have the same name, and where two objects are referenced by the same parent, the prefix of the parent's property name makes the reference unique.
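Stripped of the JSLINQ and Kendo specifics, the id-generation step boils down to something like this (the meta and selectedItem shapes are assumed from the snippet above):

```javascript
// Give each child property a unique id derived from its parent's id, so the
// flat TreeList data keeps parent/child relationships without real keys.
function toTreeRows(meta, selectedItem) {
  return meta.Properties.map(function (p) {
    return Object.assign({}, p, {
      id: selectedItem.id + "/" + p.Name, // unique: two properties on one object can't share a name
      parentId: selectedItem.id,
    });
  });
}
```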
It's not the ideal solution, but this is how things go with Telerik: a hack here, a hack there, and usually it's possible to make it work!
Something tells me there's a smarter way to do this though!
I'm creating a record that should have a reference to another record.
I already created a record that has for RecordName France and for record type Countries. The record I now want to create looks like this:
var operations = container.publicCloudDatabase.newRecordsBatch(); // I'm normally creating many cities at once; newRecordsBatch() also works with only one record.
operations.create({
    recordName: 'Paris',
    recordType: 'Cities',
    fields: {
        Country: {
            value: 'France'
        }
    }
});
operations.commit().then(function(response) {
    if (response.hasErrors) {
        console.log(response.errors[0]);
    }
});
In the CloudKit Dashboard I have set Cities to have one reference to Countries using the field Country. However, when running the code it returns "the server responded with a status of 400 (Bad Request)".
I watched the WWDC video, and the only thing Apple says about references in CloudKit JS is to use a Reference object. I don't know what that is; I guess it's a JSON object, but does someone know the keys/values of this object?
A Reference object is a dictionary with the following keys:
recordName: The unique name used to identify the record within a zone. Required.
zoneID: Dictionary that identifies a record zone in the database.
action: The delete action for the reference object. NONE or DELETE_SELF or VALIDATE. Required.
Example of a good syntax for the Country field:
Country: {
    value: {
        recordName: 'France',
        action: 'DELETE_SELF'
    }
}
More info available in the documentation, pages 68-69.
I have read and read the docs on these two methods, but for the life of me cannot work out why you might use one over the other?
Could someone just give me a basic code situation where one would be applicable and the other wouldn't?
reset sets the collection with an array of models that you specify:
collection.reset( [ { name: "model1" }, { name: "model2" } ] );
fetch retrieves the collection data from the server, using the URL you've specified for the collection.
collection.fetch({
    url: someUrl,
    success: function(collection) {
        // collection has values from someUrl
    }
});
Here's a Fiddle illustrating the difference.
We're assuming here that you've read the documentation; otherwise this will be a little confusing.
If you look at the documentation for fetch and reset, it says: suppose you have specified the url property of the collection (pointing at some server code that should return a JSON array of models), and you want the collection to be filled with the models being returned; then you use fetch.
For example you have the following json being returned from the server on the collection url:
[{
    id: 1,
    name: "a"
}, {
    id: 2,
    name: "b"
}, {
    id: 3,
    name: "c"
}]
This will create 3 models in your collection after a successful fetch. If you hunt for the code of collection fetch, you will see that fetch gets the response and internally calls either reset or add, based on the options specified.
So, coming back to the discussion: reset assumes you already have the JSON of models you want stored in the collection, and you pass it as a parameter. If you ever want to update the collection and you already have the models on the client side, then you don't need fetch; reset will do the job.
Hence, if you want the same JSON to be filled into the collection with the help of reset, you can do something like this:
var _self = this;
$.getJSON("url", function(response) {
_self.reset(response); // assuming response returns the same json as above
});
Well, this is not a practice to be followed; for this scenario fetch is better. It's just used here as an example.
Another example of reset is on the documentation page.
Hope it gives a little bit of idea and makes your life better :)
reset() is used to replace the collection's contents with a new array of models. For example:
#collection.reset(#full_collection.models)
would load #full_collections models, however
#collection.reset()
would return empty collection.
And the fetch() function retrieves the collection's models from the server, using the collection's url.