I built a tree consisting of a set of collections. Each collection is in turn made up of other sub-collections, and so on.
Every document in these collections has the same field (the one I want to update).
Is there a way to update every level of the tree with the same value (in order to initialize it)?
You can use batched writes to perform multiple individual writes at once, but there is no way to update a field across a whole collection of documents the way an UPDATE statement does in SQL databases.
You would need to enumerate each document that needs to be updated and perform an update on each of those documents.
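A rough sketch of that with the namespaced (v8-style) web SDK, where "levels" and "counter" are placeholder names for one of your collections and the shared field:
const db = firebase.firestore();
async function initializeField(collectionPath, value) {
  const snapshot = await db.collection(collectionPath).get();
  // A single batch is limited to 500 writes; chunk larger collections.
  const batch = db.batch();
  snapshot.docs.forEach((doc) => batch.update(doc.ref, { counter: value }));
  return batch.commit();
}
// Call this once per (sub)collection in the tree, e.g. initializeField('levels', 0);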
In a state-managing JavaScript framework (e.g. React), if you have a collection of objects to store in state, which is the more useful and/or performant dataset type to hold them all: an object or an array? Here are a few of the differences I can think of that might come up when using them in state:
Referencing entries:
With objects you can reference an entry directly by its key, whereas with an array you would have to use a function like dataset.find(). The performance difference might be negligible when doing a single lookup on a small dataset, but I imagine it gets larger if the find function has to pore over a large set, or if you need to reference many entries at once.
Updating dataset:
With objects you can add new entries with {...dataset, [newId]: newEntry}, edit old entries with {...dataset, [id]: alteredEntry} and even edit multiple entries in one swoop with {...dataset, [id1]: alteredEntry1, [id2]: alteredEntry2}. Whereas with arrays, adding is easy [...dataset, newEntry1, newEntry2], but to edit you have to use find(), and then probably write a few lines of code cloning the dataset and/or the entry for immutability's sake. And then for editing multiple entries it's going to either require a loop of find() functions (which sounds bad for large lists) or use filter() and then have to deal with adding them back into the dataset afterwards.
Deleting:
To delete a single entry from the object dataset you would do delete dataset[id], and for multiple entries you would either use a loop or a lodash function like _.omit(). To remove entries from an array (and keep it dense) you'd have to either use findIndex() and then .splice(index, 1), or just use filter(), which would work nicely for single or multiple deletes. I'm not sure about the performance implications of any of these options. (A rough sketch of these patterns follows after the next point.)
Looping/Rendering: For an array you can use dataset.map() or even easily render a specialized set on the fly with dataset.filter() or dataset.sort(). For the object to render in React you would have to use Object.values(dataset) before running one of the other iteration functions on it, which I suppose might create a performance hit depending on dataset size.
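To make the comparison concrete, here is roughly what the update and delete patterns above look like side by side (the entry shape and ids are just illustrative):
// Illustrative data; the entry shape and ids are made up.
const arrDataset = [
  { id: 'a1', title: 'First' },
  { id: 'b2', title: 'Second' },
];
const objDataset = { a1: arrDataset[0], b2: arrDataset[1] };
const alteredEntry = { id: 'a1', title: 'Updated first' };

// Object keyed by id: update by key, delete by destructuring the key away.
const updatedObj = { ...objDataset, [alteredEntry.id]: alteredEntry };
const { b2: removedEntry, ...objWithoutB2 } = objDataset;

// Array: update via map, delete via filter (both return new arrays).
const updatedArr = arrDataset.map((e) => (e.id === alteredEntry.id ? alteredEntry : e));
const arrWithoutB2 = arrDataset.filter((e) => e.id !== 'b2');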
Are there any points I'm missing here? Does the usability of either one depend on how large the dataset is, or possibly on how frequently "look up" operations are needed? Just trying to pin down what circumstances might dictate the superiority of one or the other.
There's no one real answer; the only valid answer is "it depends"™.
There are different use-cases that require different solutions. It all boils down to how the data is going to be used.
A single array of objects
Best used when order matters and the data is usually rendered as a whole list, where each item is passed down directly from the list loop and items are rarely accessed individually.
This is the quickest (least developer-time-consuming) way of storing received data, especially if the data already uses this structure to begin with, which is often the case.
Pros of an array state
Item order can be tracked easily,
Easy looping, where the individual items are passed down from the list,
It's often the original structure returned from API endpoints.
Cons of an array state
Updating an item would trigger a render of the full list.
Needs a little more code to find/edit individual items.
A single object by id
Best used when order doesn't matter and the data is mostly used to render individual items, like on an edit-item page. It's a step in the direction of a normalized state, explained in the next section.
Pros of an object state
Quick and easy to access/update by id
Cons of an object state
Can't re-order items easily
Looping requires an extra step (e.g. Object.keys().map)
Updating an item would trigger a render of the full list,
Likely needs to be parsed into the target state object structure
Normalized state
Implemented using both an object of all items by id, and an array of all the id strings.
{
    items: {
        byId: { /**/ },
        allIds: ['abc123', 'zxy456', /* etc. */],
    }
}
This becomes necessary when:
all use-cases are equally likely,
performance is a concern (e.g. a huge list),
the data is nested a lot and/or duplicated at different levels,
re-rendering the list has undesirable side-effects.
An example of an undesirable side-effect: updating an item, which triggers a full list re-render, could lose a modal's open/close state.
Pros
Item order can be tracked,
Referencing individual items is quick,
Updating an item:
Requires minimal code
Doesn't trigger a full list render since the full list loops over allIds strings,
Changing the order is quick and clear, with minimal rendering,
Adding an item is simple but requires adding it to both datasets (a sketch follows below the cons),
Avoids duplicated objects in nested data structures
Cons
Individual removal is the worst-case scenario, though it's not a huge deal either.
A little more code is needed to manage the state overall.
Might be confusing to keep both state datasets in sync.
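For reference, a rough sketch of these operations on the { byId, allIds } shape above (plain reducer-style helpers; the names are illustrative):
// Updating only touches byId, so a list keyed off allIds doesn't re-render.
function updateItem(state, item) {
  return { ...state, byId: { ...state.byId, [item.id]: item } };
}

// Adding requires touching both datasets.
function addItem(state, item) {
  return {
    byId: { ...state.byId, [item.id]: item },
    allIds: [...state.allIds, item.id],
  };
}

// Removing also requires touching both datasets.
function removeItem(state, id) {
  const { [id]: removed, ...byId } = state.byId;
  return { byId, allIds: state.allIds.filter((itemId) => itemId !== id) };
}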
This approach is a common normalization process used in a lot of places; here are some additional references:
Redux's state normalization is a strongly recommended best practice,
The normalizr lib.
I'm creating a simple task-list web app using Firebase Firestore and Vue.js. All seemed OK until I tried to implement the task-reordering feature, which turned the project into a nightmare.
I can easily implement the reorder functionality on the app side by using the Vue Draggable library, which is based on the well-known sortable.js library. Basically, I have a v-for loop that iterates through my tasks, something like this:
<draggable v-model="tasks">
  <div v-for="task in tasks" :key="task.id">{{task.title}}</div>
</draggable>
Note that this is wrapped in a draggable component that reorders the array whenever I drag elements, so my tasks model array is reordered automatically.
All good so far. Now I'm trying to sync this reordering with Firebase Firestore, but even though it lets me push elements into or remove them from the stored array (https://firebase.google.com/docs/reference/js/firebase.firestore.FieldValue#static-arrayunion), it does not allow me to insert at specific indexes, so I can't save the reordering.
How should I approach this?
If you want to modify the elements of a Firestore array field by index in a way that's not supported by arrayUnion or arrayRemove, you will have to:
Read the document
Modify the array field in memory
Update the field back to the document with the new array contents
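For example, a minimal sketch of that read-modify-write cycle using a transaction with the namespaced (v8-style) web SDK; "lists/my-list" and the "tasks" array field are placeholder names, not from the question:
const docRef = firebase.firestore().collection('lists').doc('my-list');

function moveTask(fromIndex, toIndex) {
  // A transaction re-reads the document, so concurrent edits aren't clobbered.
  return firebase.firestore().runTransaction(async (tx) => {
    const snap = await tx.get(docRef);
    const tasks = snap.get('tasks') || [];
    // Modify the array in memory: remove the item and re-insert it at the new index.
    const [moved] = tasks.splice(fromIndex, 1);
    tasks.splice(toIndex, 0, moved);
    // Write the whole array back.
    tx.update(docRef, { tasks: tasks });
  });
}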
I'm currently playing about with node.js and am looking to find a neater method for rendering a list based on an array returned to the client through an event.
In my sample application, the node server emits a 'details-changed' event which passes a simple array. At the UI end, I consume that event and render a list item for each of the array elements.
At present, I am deleting all the list items and recreating them from the returned array. However, I would like to know if there is a more efficient method or pattern where existing items remain, only new items are created, and missing items are removed.
Use React for handling UI updates based on data changes from the server side. It would be a good fit here: you describe the list for the current data, and React efficiently applies only the DOM updates that are actually needed, keeping existing items and only creating or removing the ones that changed.
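For instance, a minimal sketch assuming a socket.io connection delivering the "details-changed" event from the question (the item shape with id and name is made up). React re-renders the list description on each event, but thanks to the key prop only the DOM nodes that actually changed are created, updated or removed:
import React, { useEffect, useState } from 'react';
import { io } from 'socket.io-client';

const socket = io();

function DetailsList() {
  const [items, setItems] = useState([]);

  useEffect(() => {
    // Replace the state with whatever array the server sends.
    socket.on('details-changed', setItems);
    return () => socket.off('details-changed', setItems);
  }, []);

  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>{item.name}</li>
      ))}
    </ul>
  );
}

export default DetailsList;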
I use Backbone.js and I have a collection of models. This collection is retrieved and displayed on the front-end. On the front-end, I want the user to remove models from and add new models to the collection.
When the user is finished and clicks "save", I want the entire collection to be updated. Meaning that when clicking "save", the collection is synced (somehow): added models are saved and removed models are deleted.
If I manipulate the collection by removing and adding models, and then use, e.g.:
this.collection.sync()
Will it remove and add models?
There are at least 2 ways to achieve this.
Make an API endpoint to manage the models
When adding/updating a model, save the model directly with .save and when a model is removed, call .destroy on it.
The collection also has a .create function which adds the new model to it and saves it at the same time.
Wouldn't it be best to do everything in one request?
Not always. The collection could be big and the changes rather small, so you would be exchanging 100 objects with the server on every save instead of making a few small requests to add, delete or update a model within the list.
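A rough sketch of this approach (the Task model and the /tasks endpoint are made-up names):
var Task = Backbone.Model.extend({ urlRoot: '/tasks' });
var Tasks = Backbone.Collection.extend({ model: Task, url: '/tasks' });
var tasks = new Tasks();

// Adding: .create adds the model to the collection and POSTs it in one call.
var newTask = tasks.create({ title: 'Write docs' });

// Updating: .save sends the changes; { patch: true } sends only the changed attributes.
newTask.save({ title: 'Write better docs' }, { patch: true });

// Removing: .destroy DELETEs it on the server and removes it from the collection.
newTask.destroy();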
Pros
Reusable endpoints to manage individual models
Light data transfer, faster than sending the whole collection (only changes are sent through partial updates)
Possible custom behaviour for each action
Easy real-time update implementation
Cons
More requests when doing a lot of changes
Needs models to have ids
Additional endpoints to manage
Not meant for bulk operations (save all models at once)
Put the collection's models array into a model
Collections are not meant to be saved. Instead, put the models of the collection into a model which communicates with an API endpoint. This endpoint should expect an object with an array field, which can serve to replace the collection server-side.
var CollectionModel = Backbone.Model.extend({
    urlRoot: "collection/endpoint/"
});
var myModel = new CollectionModel();
// ...sometime later...
myModel.save({
    arrayAttribute: yourCollection.toJSON()
}, { patch: true });
Pros
One endpoint; always the same call to the API
Easy to implement; just in one place where the save occurs
Cons
All models are transferred on every request regardless of changes
Could be slow if the collection is big
The collection's .sync function is only a proxy to Backbone.sync and does nothing without the correct parameters. It is only used internally, e.g. within .fetch (line 1055), and isn't meant to be used directly unless you're adding custom behavior.
Suppose I have a MongoDB database, and I am storing data in it.
Is there any way to get notified by MongoDB when data is inserted, updated or deleted? I know that there are tailable cursors, but they only work with capped collections.
Anything else?
Basically, is there some kind of "event" in the JavaScript API I could listen to?
MongoDB does not have a concept of "triggers"; instead, it leaves it to you to build your own API layer to handle the tasks you would typically associate with SQL database triggers. The general premise is that typical tasks such as updating related collections are best handled by changing your schema design to embed related lists and documents within the documents you are dealing with.
Beyond that, the preferred design is to wrap your "trigger" logic into your own application API and give that layer control of such functions.
If you really need to hook into every update/insert/delete, you can look at tailing the oplog, which is a special capped collection containing all operations applied to your MongoDB deployment (it only exists when the server runs as a replica set).
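For example, a minimal sketch of tailing the oplog with the official Node.js driver (a replica set and a reasonably recent driver version are assumed):
const { MongoClient } = require('mongodb');

async function tailOplog() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const oplog = client.db('local').collection('oplog.rs');

  // Grab the most recent entry so we only stream operations from now on.
  const last = await oplog.find().sort({ $natural: -1 }).limit(1).next();

  // A tailable, awaitData cursor stays open and waits for new oplog entries.
  const cursor = oplog.find(
    { ts: { $gt: last.ts } },
    { tailable: true, awaitData: true }
  );

  for await (const op of cursor) {
    // op.op is 'i' (insert), 'u' (update) or 'd' (delete); op.ns is "db.collection".
    console.log(op.op, op.ns, op.o);
  }
}

tailOplog().catch(console.error);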