I am using PhoneGap and jQuery Mobile. I am trying to get some JSON data from a remote location and then populate a local WebSQL database with it. Here is my JS function:
function getLocations() {
    var tx = window.openDatabase('csdistroloc', '1.0', 'Distro DB', 1000000);
    tx.transaction(function(tx) {
        tx.executeSql('DROP TABLE IF EXISTS locations'); // this line works!
        tx.executeSql('CREATE TABLE IF NOT EXISTS locations (id, name, address, postalcode, phone, category)'); // this line works!
        $.ajax({
            url: "http://mydomain.com/api.php",
            dataType: 'json',
            data: { action: "getlocations" },
            success: function(data) {
                tx.executeSql("INSERT INTO locations (id, name, address, postalcode, phone, category) VALUES (2,'cheese','232','seven',5,6)"); // this line produces an error
            }
        });
    }, dberror, dbsuccess);
}
Running the above function gives me an error "INVALID_STATE_ERR: DOM Exception 11" on the line noted above. It does the same thing when I am actually trying to use the returned JSON data to insert data. I have also tried the $.getJSON technique with the exact same result.
Any advice would be appreciated!
Although the accepted answer is correct, I would like to expand upon it because I encountered the same problem and that answer doesn't say why it doesn't work as the OP had it.
When you create a transaction in Web SQL, the transaction processing remains alive only so long as there are any statements queued up in the transaction. As soon as the pipeline of statements in the transaction dries up, the engine closes (commits) the transaction. The idea is that when the function(tx) { ... } callback runs,
It executes all of the statements it needs to. executeSql is asynchronous, so it returns immediately even though the statement has not yet been executed.
It returns control back to the Web SQL engine.
At this point the engine notices that there are statements queued up and runs them to completion before closing the transaction. In your case, what happens is this:
You call executeSql twice to queue up two statements.
You request something through ajax.
You return.
The engine runs the two statements that it has queued up. The ajax request is also running asynchronously, but it must access the network, which is slow, so it has likely not completed yet. At this point the statement queue is empty, and the Web SQL engine decides that it's time to commit and close the transaction! It has no way of knowing that another statement is coming later! By the time the ajax call returns and attempts to execute the INSERT INTO locations, it's too late: the transaction is already closed.
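A minimal sketch of the rule in isolation (assuming an open db handle):

db.transaction(function(tx) {
    tx.executeSql('SELECT 1'); // OK: queued before the callback returns

    setTimeout(function() {
        // Too late: the queue emptied and the transaction was committed,
        // so this throws INVALID_STATE_ERR.
        tx.executeSql('SELECT 1');
    }, 0);
});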
The solution suggested by the accepted answer works: don't use the same transaction inside the ajax callback but create a new one. Unfortunately, it has the pitfall you would expect from using 2 transactions instead of 1: the operation is no longer atomic. That may or may not be important for your application.
If atomicity of the transaction is important to you, your only two recourses are:
Do everything (all 3 statements) in one transaction inside the ajax callback.
This is what I recommend (a sketch follows this list). I think it's very likely that waiting until the ajax request completes before creating the table is compatible with your application's requirements.
Perform the ajax request synchronously as explained here.
I don't recommend that. Asynchronous programming in JavaScript is a good thing.
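For illustration, a sketch of option 1 applied to the question's code (same schema and callbacks as above):

function getLocations() {
    var db = window.openDatabase('csdistroloc', '1.0', 'Distro DB', 1000000);
    $.ajax({
        url: "http://mydomain.com/api.php",
        dataType: 'json',
        data: { action: "getlocations" },
        success: function(data) {
            // One transaction, created only after the slow network call is
            // done, so all three statements commit (or fail) atomically.
            db.transaction(function(tx) {
                tx.executeSql('DROP TABLE IF EXISTS locations');
                tx.executeSql('CREATE TABLE IF NOT EXISTS locations (id, name, address, postalcode, phone, category)');
                tx.executeSql("INSERT INTO locations (id, name, address, postalcode, phone, category) VALUES (2,'cheese','232','seven',5,6)");
            }, dberror, dbsuccess);
        }
    });
}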
By the way, I encountered the problem in the context of Promises, in code that looked something like this:
// XXX don't do this, it doesn't work!
db.transaction(function(tx) {
    new Promise(function(resolve, reject) {
        tx.executeSql(
            "SELECT some stuff FROM table ....", [],
            function(tx, result) {
                // extract the data that are needed for
                // the next step
                var answer = result.rows.item( .... ).some_column;
                resolve(answer);
            }
        );
    }).then(function(answer) {
        tx.executeSql(
            "UPDATE something else",
            // The answer from the previous query is a parameter to this one
            [ ... , answer, .... ]
        );
    });
});
The problem is that, with promises, the chained .then() clause is not run immediately upon resolution of the original promise. It is only queued for later execution, much like the ajax request. The only difference is that, unlike the slow ajax request, the .then() clause runs almost immediately. But "almost" is not good enough: it may or may not run soon enough to slip the next SQL statement into the queue before the transaction gets closed; accordingly, the code may or may not produce the invalid-state error depending on timing and/or browser implementation.
Too bad: promises would have been useful inside SQL transactions. The above pseudo-example can easily be rewritten without promises (see the sketch below), but some use cases could greatly benefit from chains of many .then()s, as well as things like Promise.all, which can ensure that an entire collection of SQL statements runs in any order but completes before some other statement.
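For reference, a sketch of the pseudo-example rewritten without promises; nesting the UPDATE inside the SELECT's success callback keeps the statement queue non-empty, so the transaction stays open (elided fragments kept as in the original):

db.transaction(function(tx) {
    tx.executeSql(
        "SELECT some stuff FROM table ....", [],
        function(tx, result) {
            var answer = result.rows.item( .... ).some_column;
            // Queue the next statement from inside the success callback,
            // while the transaction is still alive.
            tx.executeSql(
                "UPDATE something else",
                [ ... , answer, .... ]
            );
        }
    );
});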
I would first suggest not naming your database variable 'tx' but rather db or database. This could be a variable-naming problem, since both the function parameter and your database variable are called "tx".
EDIT: I had this same problem and solved it by making the query within the callback its own transaction. Like so:
success: function(data) {
    tx.transaction(function(transaction) {
        transaction.executeSql("INSERT INTO locations (id, name, address, postalcode, phone, category) VALUES (2,'cheese','232','seven',5,6)"); // no more DOM exception!
    });
}
I think the problem is that, by the time the callback fires, the outer transaction has completed, because WebSQL's transactions are not synchronous.
We do have a way to keep the transaction alive while you do an AJAX call or any other async operation. Basically, before calling AJAX you start a dummy db operation, and on success of that operation you check whether the AJAX is done; if it isn't, you call the same dummy operation again, repeating until the AJAX is done. When the AJAX is done you can reuse the transaction object to run the next set of executeSqls. This approach is thoroughly explained in this article here. (I hope someone will not delete this answer too, as someone did earlier on a similar question.)
Try this approach.
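A minimal sketch of the keep-alive idea (the flag and the inserted fields are illustrative, not taken from the article):

var db = window.openDatabase('csdistroloc', '1.0', 'Distro DB', 1000000);

db.transaction(function(tx) {
    var fetched = null;

    $.ajax({
        url: "http://mydomain.com/api.php",
        dataType: 'json',
        data: { action: "getlocations" },
        success: function(data) { fetched = data; }
    });

    (function keepAlive(tx) {
        if (fetched === null) {
            // Dummy statement: its success callback queues another one, so
            // the transaction's queue never empties while AJAX is in flight.
            tx.executeSql('SELECT 1', [], function(tx) { keepAlive(tx); });
        } else {
            // Same tx, still open: now run the real statements.
            tx.executeSql("INSERT INTO locations (id, name) VALUES (?, ?)",
                [fetched.id, fetched.name]); // illustrative fields
        }
    })(tx);
});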
Related
I have an application which listens to a websocket endpoint and processes the data received from it and saves it to a database.
A race condition arises when two callbacks are invoked concurrently (for example: one task may begin processing, then another task may begin processing and update the database, then the first task may update the database; in the end, the database updates are out of order).
The solution I thought of was to record the exact time a callback is called, process the data, then attach the time to the data passed to the database and in the database compare this time with the last update time and act accordingly.
One possible problem I thought of is that the time may be recorded out of order (for example: consider the scenario where the first callback is called, then the second callback is called and the time is recorded, then the time is recorded for the first callback).
What would be the right way to do it? I'd welcome solutions to this problem or other ways to go about it.
EDIT: To be more specific: since I intend for the program to be as real-time as possible, I'd like to allow the most up-to-date callback to be processed without delay (without waiting for all previous callbacks to finish processing), but to ensure that the end result of the processing (as recorded in the database) adheres to the order in which the callbacks arrived (is not corrupted).
You can have the data handler callback return a promise that resolves when it's finished.
Each time you get new data from the socket, wait for that promise before handling it, then store the resulting promise for the next data to wait for.
That would look like this:
let ready = Promise.resolve();

socket.on(..., data => {
    ready = ready.then(() => processData(data));
});
This will have no effect on any other code.
EDIT: To do expensive work outside the lock, you can write:
socket.on(..., data => {
    const result = doExpensiveWork(data); // returns a promise
    ready = Promise.all([result, ready]).then(([result]) => insertData(result));
});
I'm running into problems with the validated method package in my app tests. I'm calling my methods through the _execute function in order to be able to pass a userId and simulate a logged-in user while testing. The problem is that my asserts right underneath that _execute call run before the method has a chance to complete. I know my test works, though, because the failure only happens sometimes, mostly because Mongo isn't always returning results quite as fast.
I looked around and found a todos app that uses the _execute function in its tests. I can't get those tests to fail no matter how many times I rerun them, though.
This is an example of my test code.
describe('clients.add', function() {
    it('should add an empty (draft) client', function() {
        const res = clients_add._execute({ userId: 'CURRENTUSERID' }, { company_id: c1._id });
        assert.instanceOf(res, Mongo.ObjectID, 'method returns the newly created clients ID');
        const db_client = Clients.findOne(res);
        assert.isTrue(db_client.draft, 'client is drafted');
        assert.isDefined(db_client.created, 'there\'s a created date');
    });
});
clients_add does quite a few permission checks and can therefore take a little while to complete. Rerunning this test 20 times will fail about 5 times and pass the other 15.
Shouldn't the _execute function be synchronous? How do I make it synchronous? What am I missing?
In server code, if you provide a callback to database modification functions like insert, it returns the created ID instantaneously and runs the callback only once the database has acknowledged the write. If you don't provide a callback, the insert call is synchronous and throws an error if the operation fails. See more about this in the Meteor docs.
It seems that you have provided an error-handling callback to the insert function in your method code. This causes the inconsistent behavior, since the database might not actually have had time to do the write before you call findOne in your test. Also, it is redundant: if an error occurs in the insert, the method has already returned, and the error is never shown to the user. It's better to simply omit the error-handling callback altogether:
return Clients.insert(new_client);
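For contrast, a sketch of the problematic pattern described above (new_client stands in for your method's document):

// With an error-handling callback, insert returns the new ID immediately,
// possibly before the write is acknowledged, so the test's findOne() can
// race the insert. The error also never reaches the method's caller.
return Clients.insert(new_client, function(err) {
    if (err) console.error(err);
});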
In a language with threads and locks it is easy to implement a lazy load: check the value of a variable; if it's null, lock the next section of code, check the value again, and then load the resource and assign it. This prevents the resource from being loaded multiple times and makes threads arriving after the first wait for the first thread to complete the needed action.
Pseudocode:
if (myvar == null) {
    lock(obj) {
        if (myvar == null) {
            myvar = getData();
        }
    }
}
return myvar;
JavaScript runs in a single thread; however, it still has this type of issue because of asynchronous execution while one call is waiting on a blocking resource. In this Node.js example:
var allRecords;

module.exports = function getAllRecords(callback) {
    if (allRecords) {
        return callback(null, allRecords);
    }
    db.getRecords({}, function(err, records) {
        if (err) {
            return callback(err);
        }
        // Use the existing object if it has been
        // set by another async request to this
        // function
        allRecords = allRecords || records;
        return callback(null, allRecords);
    });
};
I'm lazy loading all the records from a small DB table the first time this function is called and then returning the in-memory records on subsequent calls.
Problem: If multiple async requests are made to this function at the same time then the table is going to be loaded unnecessarily from the DB multiple times.
In order to solve this I could simulate a locking mechanism by creating a var lock; variable and setting it to true while the table is loading. I would then put the other async calls into a setTimeout() loop and check back on this variable every (say) 1 second until the data was available and then allow them to return.
The problems with that solution are:
It's fragile: what if the first async call throws and never unsets the lock?
How many times do we loop back into the timer before giving up?
How long should the timer be set for? In some environments 1 second might be way too long and inefficient.
Is there a best practice for solving this in JavaScript?
On the first call to the service, initialize an array. Start the fetch operation. Create a Promise, store it in the array.
On subsequent calls, if the data is there, return an already-fulfilled Promise. If not, add another Promise to the array and return that.
When the data arrives, resolve all the waiting Promise objects in the list. (You can throw away the list once the data's there.)
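A sketch of that idea in code; note that caching one shared promise subsumes the array of waiters, since every caller can simply be handed the same promise (this version returns a promise instead of taking a callback):

var recordsPromise = null;

module.exports = function getAllRecords() {
    if (!recordsPromise) {
        // First caller starts the fetch; all callers share the same promise.
        recordsPromise = new Promise(function(resolve, reject) {
            db.getRecords({}, function(err, records) {
                if (err) {
                    recordsPromise = null; // allow a later call to retry
                    return reject(err);
                }
                resolve(records);
            });
        });
    }
    return recordsPromise;
};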
I really like the promise solution in the other answer: very clever, very interesting. Promises aren't the dominant methodology, so you may need to educate the team. I'm going to go in another direction, though.
What you're after is a memoize function: an in-memory key/value cache of expensive results. JavaScript: The Good Parts has a memoize sample towards the end. Lodash has a memoize function. These assume synchronous processing, so they don't account for your scenario; that is, they'd hit the database many times until one of the "threads" replied.
The async library also has a memoize function that does exactly what you want. In its innards, it keeps a queue array of callbacks, and once it gets the answer, it both caches it and calls all the callbacks.
If you're into inventing, by all means, use promises. If you'd just like a plug-n-play answer, use async#memoize.
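A hedged usage sketch of async#memoize, reusing the db.getRecords loader from the question:

var async = require('async');

// Wrap the loader once. While the first fetch is in flight, async.memoize
// queues concurrent callers' callbacks; they all receive the one cached result.
var getAllRecords = async.memoize(function(callback) {
    db.getRecords({}, callback);
});

getAllRecords(function(err, records) { /* first call hits the DB */ });
getAllRecords(function(err, records) { /* queued, then served from cache */ });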
If I have code like this:
if (request.params.friends != null) {
    _.each(request.params.friends, function(friend) {
        // create news
        var News = Parse.Object.extend("News");
        var news = new News();
        news.set("type", "ask");
        news.save();
    });
    response.success();
}
and the length of request.params.friends is 2, does the second news item get saved for certain? If not, how do I make sure it gets saved? I looked at the Parse.Promise documentation, and in all the examples the loop is inside a query or a save. Do I need to save the first news item first and then create the Promise? I still don't get how "asynchronous" works. Does response.success() work like a return or break?
The loop does get executed twice.
response.success() acts like a return.
The asynchronous magic is in the "save" method. When "save" is called, the Parse library says, "OK, you want me to save it. I'll save it, but not now. For now, here is a promise that I'll save it later." The save method returns a Promise object, and the promise will be fulfilled when the object is actually saved.
So what happens is a little like this:
First time through the loop: create news item #1.
Ask Parse to save news item #1.
Second time through the loop: create news item #2.
Ask Parse to save news item #2.
Return successful response.
Parse actually saves news items #1 and #2.
It's been a while since I've used Parse, but I'm not sure both of the news objects would actually get saved. Calling response.success() could kill work in progress. Here is an alternative implementation:
// News is defined once elsewhere (see the note and sketch below)
var objectsToSave = _.collect(request.params.friends, function(friend) {
    var news = new News();
    news.set({ type: "ask" });
    return news;
});

Parse.Object.saveAll(objectsToSave, {
    success: function(list) {
        // All the objects were saved.
        response.success();
    },
    error: function(error) {
        // An error occurred while saving one of the objects.
        response.error(error);
    }
});
The saveAll function saves all the objects at once. It's usually faster than saving objects one at a time. In addition to providing saveAll with the objects to save, we provide it an object with a success function and an error function. Parse.com promises only to call the functions AFTER the save is complete (or it experienced an error).
There are a few other things going on. The Parse.Object.extend statement belongs in a different place in your code: define the class once, not inside the loop. Also, the set function doesn't have to be called once per field with a key and a value; it can take a JavaScript object to set several fields at once.
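For instance, a sketch with the class definition hoisted out of the loop:

// Define the class once, outside the handler, instead of on every iteration.
var News = Parse.Object.extend("News");

var objectsToSave = _.collect(request.params.friends, function(friend) {
    var news = new News();
    news.set({ type: "ask" }); // an object can set several fields at once
    return news;
});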
I'm having trouble deleting data from IndexedDB. The method works fine with WebSQL but throws an error with IndexedDB. The initial population seems to work OK.
This error is in Chrome:
Uncaught TypeError: Cannot read property 'ABORT_ERR' of undefined IndexedDbProvider.js:627
self.db.transaction.setCallbacks.onerror
The code has been moved to jsFiddle here.
It's worth noting there is no error if I simply call remove(). The error appears to happen exclusively when I attempt to saveChanges().
Seeing as I (potentially) have your attention: are toArray and forEach synchronous, so that I could reduce transactions on the save?
Is dvContext.Data.remove(data) just a typo? It should be dContext.Datas.remove(data);
Some of JayData's functions are sync and others are async, but in a logical way :)
When JayData touches the local database or calls a remote method over the network, those functions are async. Let's go through your code:
dContext.Datas
is a filter (Queryable) which selects all records. As long as you are just building the filter, the calls are sync, so
dContext.Datas.filter().take().skip().orderBy().map()
just builds up the filter in memory and does nothing else, so they're all sync. Then:
dContext.Datas.toArray()
toArray fires the real action: it executes the query. Here we have to touch the local database, so it is async; when it finishes, it can call a callback function or resolve the promise it returned earlier.
The same applies to remove: it just drops the record into a set (so it's sync), and saveChanges() does the real work (it's async).
Your problem, imho, is that your toArray has both a callback function and a then branch, so both will be called in parallel and the then branch will not wait for the callback to finish. The solution:
dContext.Datas.toArray()
    .then(function(data) {
        // your removes in a loop
        // important: return the promise
        return dContext.saveChanges();
    })
    .then(function() {
        // whatever
    });