I am building a billing web app where users update the stock of an item frequently, so the write costs will add up quickly. Is there a way to update the stock without it counting as a document write?
Short answer: no
Long answer: most likely not
Each document updated counts as a document write for billing on Firestore, even if you bundle multiple updates into a single batch call.
One suggestion would be to throttle updates on the client side to slow down rapid updates. This will, however, affect the freshness of the data your end users see while using the app. Example of throttling: Small article on throttling/debouncing
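Here's a minimal debounce sketch (hand-rolled; utilities like Lodash's debounce do the same), so that a burst of rapid stock changes collapses into a single write. The 2-second delay and the itemRef parameter are illustrative assumptions:

// Debounce: restart a timer on every call; only the last call within
// `delayMs` actually runs, so rapid changes produce one Firestore write.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Hypothetical usage: only the final stock value entered within 2s is written.
const writeStock = debounce(
  (itemRef, stock) => itemRef.update({ stock }),  // one billed document write
  2000
);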
Another suggestion is to avoid making updates that are meaningless.
You can check whether the value has actually changed, or whether the change is important enough, before making the update:
function updateValue(newValue) {
  // Skip the write entirely if nothing changed
  // (`value` holds the last value we read or wrote)
  if (newValue === value) {
    return;
  }
  // Otherwise make the update request
  // ...
}
Any other suggested solution would require your end users to share the same network, or would introduce another service outside of Firestore.
Last thing to consider is that maybe it's too soon to consider billing. Firestore is pretty cheap for what it offers. Here's a good video by Firebase about billing:
Firebase video on Getting to know pricing
If none of these suggestions has been helpful, I'd recommend adding some code examples, etc., so you can get better suggestions :)
In my app, I have a list that requires an "or" condition. But, as the docs say:
In this case, you should create a separate query for each OR condition and merge the query results in your app.
As a result, in my service, I'm managing two queries and surfacing them as a single observable list to consumers.
The problem comes in with updating. I have the choice of doing extra work to match up the item needing update to the correct collection so I can do the following:
myCollection.doc(item.id).update(item);
or I can make this much simpler and just do:
angularFirestore.doc(`path/to/${item.id}`).update(item);
I'm operating under the assumption that the first method will result in faster updates, since I'm using the same reference, so it would optimistically update instantly. And that the latter will be slower because it takes a more roundabout path: the persistence layer is updated first, and the collection reference gets notified about the change afterwards (probably still a small delay).
All of the above is assumption, however. I base this only on a few random instances where I've seen it take a second or two for an update or delete to show up in another part of the view, but I haven't been able to actually inspect the process.
Does anyone know if the above is correct? Should I be doing the extra work to write through the collection references, or does AngularFire (and/or Firestore) handle this and make them effectively the same operation under the hood?
AngularFire2 is a thin wrapper around RxFire, which itself is a relatively thin wrapper around the Firebase JavaScript SDK.
There should be no significant performance difference between updating a document through AngularFire or updating it directly through the JavaScript SDK. In both cases the majority of the time is spent in the JavaScript SDK, and on the wire between the client and server. For this reason I typically update directly through the JavaScript SDK, since it's often a bit more direct and the AngularFire abstraction has little advantage for me in write operations. Given that AngularFire is built on top of this SDK, it picks up the changes instantly even when they're not made through AngularFire.
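For what it's worth, a minimal sketch of what updating directly through the JavaScript SDK looks like (v8-style namespaced API assumed; the path is the hypothetical one from the question). Any AngularFire observable on the same document emits the change automatically, because both go through the same SDK cache:

import firebase from 'firebase/app';
import 'firebase/firestore';

function updateItem(item) {
  // One plain SDK write; AngularFire subscribers see it instantly
  return firebase.firestore().doc(`path/to/${item.id}`).update(item);
}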
If you have an instance where this does not seem to be the case, I recommend creating a question with the minimal, complete/standalone code that reproduces that problem.
I'm a computer science tutor, with a lot of experience teaching Python and Java but without a lot of web experience. I'm now working with a high school student who has been assigned a project that he wishes to implement in HTML and JS. So this is a good chance for me to learn more about this type of application.
It's an application for taking food orders on the web and showing an illustration of your order. You choose a main dish and any custom alterations. Then you choose another course of the meal, maybe a salad and any alterations. And so on. The first page it shows you will show an empty plate. You choose a course to customize and it will take you to a page that shows options, then another page with further options, and so forth. Each time you are finished configuring an option, it will take you back to the starting page and show a picture of the meal so far on the plate (which was formerly empty).
The main part I'm unfamiliar with is handling persistent state. Although each page will have a unique structure of images (possibly a canvas) and buttons, it will have to remember the order as configured as it loads each page.
I'm wondering what a simple way of handling this is. This is a high school student with very little prior programming experience, and I'm allowed to help him, but the overall approach has to be within his grasp to understand.
Perhaps sessionStorage is the best bet?
Regarding possible duplication of Persist variables between page loads: my needs are more complex than that question, since I need to store more than a single global variable, and I may need a framework to simplify this. In particular, I'm interested in doing this in a way that is simple enough for a high school student to understand, so that he can implement some of it himself (at least some of it has to be his own work). I don't know if using a framework will make the job simpler (that would be good) or whether it will require more effort to understand the framework itself (especially for an inexperienced student; not good).
Perhaps sessionStorage is the best bet?
Yes, if you want the state to expire automatically when the browser session ends. Otherwise, use localStorage, which persists beyond the session.
It's important to remember that web storage (both sessionStorage and localStorage) only stores strings, so a common technique is to use it to store JSON that you parse on load, so you can have complex state. E.g.:
// On load
var state = JSON.parse(localStorage.getItem("state-key"));
if (!state) {
// None was stored, create some
state = {/*...default state...*/};
}
// On save
localStorage.setItem("state-key", JSON.stringify(state));
Note that this bit:
var state = JSON.parse(localStorage.getItem("state-key"));
if (!state) {
relies on the fact that getItem returns null if there is no stored data for that key, and JSON.parse coerces its argument to a string: null coerces to "null", which is parsed back into null. :-)
Another way to do it is to use ||:
var state = JSON.parse(localStorage.getItem("state-key")) || {
// default state here
};
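Applied to the meal-order app, here's a sketch of carrying the order across pages with sessionStorage; the key name and the shape of the state object are illustrative, not prescribed:

// Read the order saved by a previous page, or start fresh
function loadOrder() {
  return JSON.parse(sessionStorage.getItem("meal-order")) || {
    main: null,
    salad: null,
    alterations: []
  };
}

// Persist the order so the next page load can pick it up
function saveOrder(order) {
  sessionStorage.setItem("meal-order", JSON.stringify(order));
}

// E.g. on the salad-options page:
var order = loadOrder();
order.salad = "caesar";
saveOrder(order);   // survives navigating back to the plate page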
Let's say I have a couchDB database called "products" and a frontend with a form.
Now, if a user opens a document from this database in the form, I want to prevent other users from editing this specific document.
Usually pretty simple:
-> read document from couchDB
-> set a variable to true like: { edit : true }
-> save (merge) document to couchDB
-> if someone else tries to open the document, he will receive an error because of edit: true.
BUT, what if two users open the document at the exact same time?
The function would be called twice, and when the second user opens the document he would falsely see edit: false, because the first user didn't have enough time to save his edit: true. So how do I prevent this behaviour?
First solution would be:
Build an array as a queue for database requests and don't allow parallel requests, so all requests are worked off one after another. But in my opinion this is a bad solution, because the system would become incredibly slow at some point.
Second solution:
Store the documentIDs of the currently edited documents in a local array in the script. This would work because it is not an asynchronous process, so the second user would receive his error immediately.
So far so good. BUT, what if some day there are too many users and this system has to run in a cluster (the Node client server, not the database)? Now the second solution would not work anymore, because every cluster slave would have its own array of documentIDs. Sharing them would be yet another asynchronous task and would result in the same problem as above.
Now I'm out of ideas. How do big clustered systems usually handle problems like that?
CouchDB uses MVCC to maintain consistency in your database. When a document is being updated, you must supply both the ID (_id) and revision number (_rev) otherwise your change will be rejected.
This means that if 2 clients read the document at revision 1 and both attempt to write a change using that same revision number, only the first will be accepted by the database. The 2nd client will receive an error, and it should fetch the latest revision of the document in order to proceed.
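A minimal sketch of that read-modify-write cycle against CouchDB's HTTP API using fetch (the database URL and document ID are placeholders): on a 409 Conflict the client re-reads the latest revision and retries.

async function updateProduct(dbUrl, id, mutate) {
  while (true) {
    // Read the current document; the body includes _id and _rev
    const doc = await (await fetch(`${dbUrl}/${id}`)).json();
    mutate(doc);  // apply the change, e.g. doc.edit = true
    const res = await fetch(`${dbUrl}/${id}`, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(doc)  // _rev must still be current
    });
    if (res.ok) return;  // our revision was current; write accepted
    if (res.status !== 409) throw new Error(`Unexpected status ${res.status}`);
    // 409: someone else wrote first; loop and retry with the fresh _rev
  }
}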
In a single-node environment, this model prevents conflicts outright. However, in cases where replication is occurring, it is still possible to get conflicts, even when using MVCC. This is because conflicting revisions can technically be written to different nodes before they have been replicated to one another. In this case, CouchDB will record the conflict, and your application is responsible for resolving them.
CouchDB has stellar documentation, in particular they have an article all about conflicts and replication that I highly recommend for this subject.
I've been dealing with a problem for a couple of days, and I'm really hoping you can help me.
It's a node.js based API using sequelize for MySQL.
On certain API calls the code starts SQL transactions which lock certain tables, and if I send multiple requests to the API simultaneously, I get LOCK_WAIT_TIMEOUT errors.
var SQLProcess = function () {
  var self = this;
  var _arguments = arguments;
  return sequelize.transaction(function (transaction) {
    return doSomething({transaction: transaction});
  })
  .catch(function (error) {
    // If the transaction timed out waiting for a lock, retry after a random back-off
    if (error && error.original && error.original.code === 'ER_LOCK_WAIT_TIMEOUT') {
      return Promise.delay(Math.random() * 1000)
        .then(function () {
          return SQLProcess.apply(self, _arguments);
        });
    } else {
      throw error;
    }
  });
};
My problem is that the simultaneously running requests lock each other out for a long time, so my request only returns after a very long time (~60 seconds).
I hope I explained it clearly and understandably, and that you can offer me a solution.
This may not be a direct answer to your question, but looking at why you have this problem in the first place might also help.
1) What does that doSomething() do? Is there any way we can make improvements there?
First, a transaction that takes 60 sec is suspicious. If you lock a table for that long, chances are the design should be revisited, given that a typical db operation runs in 10 - 100 ms.
Ideally, all the data preparation should be done outside of the transaction, including reads from the database. The transaction should contain only the truly transactional operations.
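A sketch of that restructuring with Sequelize (modern async/await style; the Item model and field names are placeholders for whatever doSomething() actually touches): reads and preparation happen before the transaction, so locks are held only for the writes.

async function sqlProcess(itemId, delta) {
  // 1. Read and prepare OUTSIDE the transaction: no locks held yet
  const item = await Item.findByPk(itemId);
  const newStock = item.stock + delta;

  // 2. The transaction contains only the actual writes
  return sequelize.transaction(async (t) => {
    await Item.update(
      { stock: newStock },
      { where: { id: itemId }, transaction: t }
    );
  });
}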
2) Is it possible to use mysql stored procedure?
True, stored procedures in MySQL are not compiled, as PL/SQL is for Oracle, but they still run on the database server. If your application is really complicated and that transaction involves a lot of back-and-forth network traffic between the database and your node application, then with so many layers of javascript calls it could really slow things down. If 1) doesn't save you a lot of time, consider using a MySQL stored procedure.
The drawback of this approach, obviously, is that it is harder to maintain code split between Node.js and MySQL.
If 1) and 2) are definitely not possible, you may consider some kind of flow control or queuing tool. Either your app makes sure the 2nd request doesn't start until the first one finishes, or you let some 3rd-party queuing tool handle that. It seems you don't need any parallelism in running those requests anyway.
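For the first option, a minimal in-process queue sketch: each task waits for the previous one to settle, so the conflicting transactions never run concurrently. Note this only serializes requests within a single Node process:

let chain = Promise.resolve();

function enqueue(task) {
  const next = chain.then(() => task());
  chain = next.catch(() => {});  // keep the chain alive after a failure
  return next;
}

// Usage: enqueue(() => SQLProcess()) instead of calling SQLProcess() directly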
The main reason for deadlocks is poor database design. Without further information about your database design, and about which exact queries might or might not lock each other, it is impossible to give you a specific solution for your problem.
However I can give you a general advice/approach to solve this issue:
I would make sure that your database is normalized at least into Third Normal Form or, if that still isn't enough, even further. There might be tools to automate this process for you.
Aside from reducing the likelihood of deadlocks, this also helps keep your data consistent, which is always a good thing.
Keep your transactions as slim as possible. If you are inserting new rows into your tables and updating other tables accordingly, you might want to use a trigger rather than another SQL statement to do so. The same applies to reading rows and values; such things can be done before or after your transaction.
Choose the correct Isolation Level. Possible isolation levels are:
READ_UNCOMMITTED
READ_COMMITTED
REPEATABLE_READ
SERIALIZABLE
Sequelize's official documentation describes how you can set the isolation level and lock/unlock transactions yourself.
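For example, a sketch of setting the level per transaction (the ISOLATION_LEVELS constants are part of Sequelize's public API; the exact import style varies by Sequelize version):

const { Transaction } = require('sequelize');

sequelize.transaction(
  { isolationLevel: Transaction.ISOLATION_LEVELS.READ_COMMITTED },
  function (t) {
    return doSomething({ transaction: t });  // doSomething from the question
  }
);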
As I said, without further insight into your database and query design, that's all I can do for you right now.
Hope this helps.
I'm working on a real-time JavaScript application that requires all changes to a database to be mirrored instantly in JavaScript, and vice versa.
Right now, when changes are made in JavaScript, I make an ajax call to my API and make the corresponding changes to the DOM. On the server, the API handles the request and finishes up by sending a push using PubNub to the other current JavaScript users with the change that has been made. I also include a changeID that is sequential, so JavaScript can resync the entire data set if it misses a push. Here is an example of that push:
{
  "changeID": "2857693",
  "type": "update",
  "table": "users",
  "where": {
    "id": "32"
  },
  "set": {
    "first_name": "Johnny",
    "last_name": "Applesead"
  }
}
When JavaScript gets this change, it updates the local storage and makes the corresponding DOM changes based on which table is being changed. Please keep in mind that my issue is not with updating the DOM, but with syncing the data from the database to JavaScript both quickly and seamlessly.
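Here is roughly what that client-side handler looks like (a simplified sketch; lastChangeID, resyncAll, and applyToLocalStore stand in for the real implementations):

function onPush(change) {
  const id = parseInt(change.changeID, 10);
  if (id !== lastChangeID + 1) {
    return resyncAll();  // a push was missed; refetch the whole data set
  }
  lastChangeID = id;
  if (change.type === "update") {
    applyToLocalStore(change.table, change.where, change.set);
  }
  // ...then update the DOM for the affected table
}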
Going through this, I can't help but think that this is a terribly complicated solution to something that should be reasonably simple. Am I missing a Gotcha? How would you sync multiple JavaScript Clients with a MySQL Database seamlessly?
Just to update the question a few months later - I ended up sticking with this method and it works quite well.
I know this is an old question, but I've spent a lot of time working on this exact same problem although for a completely different context. I am creating a Phonegap App and it has to work offline and sync at a later point.
The big revelation for me is that what I really need is version control between the browser and the server, so that's what I made. It stores data in sets, and keys within those sets, and versions all of those individually. When things go wrong, there is a conflict-resolution callback that you can use to resolve it.
I just put the project on GitHub, it's URL is https://github.com/forbesmyester/SyncIt