Hey, so I'm making a leaderboard for a Discord bot using discord.js, and I want to display users by their names instead of their IDs, so I use the discord.js function .fetchUser(ID).
.fetchUser(ID) returns a promise, which can take some time to resolve depending on bandwidth.
So because discord.js uses a promise, my code is no longer synchronous. I thought that by putting the code in a promise it would run in the right order.
And I was wrong.
my code:
// This is run inside a .prototype function, so `this` is defined
return new Promise((resolve, reject) => {
    this.list = [];
    // users is an object with user IDs as the keys
    // Currently it only has one key in it (mine)
    for (let i in users) {
        let pos = 0;
        let score = this.getScore(users[i]);
        if (score === 0) {
            client.fetchUser(i).then((user) => {
                console.log(`pushed`); // logs way after "Finished" is logged
                this.list.push([user.username.substring(0, 13), score]);
            });
            continue;
        }
        for (let h = 0; h < this.list.length; h++) {
            if (score >= this.list[h][1]) {
                pos = h;
                break;
            }
        }
        client.fetchUser(users[i].id).then((user) => {
            this.list.splice(pos, 0, [user.username.substring(0, 13), score]);
        });
    }
    console.log(`Finished: ` + this.list.length);
    resolve(this.list);
})
You have to chain off of the Promises you receive. Client#fetchUser() returns a Promise which you are waiting on, but not far enough up the chain. You have to propagate Promises upward: if something in your function call chain is asynchronous, you should consider the whole chain asynchronous.
You fill this.list from within fetchUser(...).then(...), which isn't necessarily bad, as long as you don't try to use the list until fetchUser's resolution chain is done. You aren't waiting for that; you immediately resolve(this.list).
Consider this abbreviated form of your original function:
return new Promise((resolve, reject) => {
    this.list = [];
    for (let i in users) {
        // A promise is created right here
        client.fetchUser(i).then((user) => {
            // This populates list only AFTER the then callback runs
            this.list.push([user.username.substring(0, 13), score]);
        });
    }
    // You aren't waiting until the promises created by fetchUser complete
    resolve(this.list);
})
this.list can't be considered "complete" until all of the users involved have had their profiles loaded and their scores retrieved. Given that, we can use Promise.all(), which takes an array of Promises and resolves once all of the provided promises have resolved. Waiting that way looks like the following, which still isn't ideal, but waits correctly:
return new Promise((resolve, reject) => {
    this.list = [];
    // This is an array of Promises
    const discordUserPromises = Object.keys(users).map(id => client.fetchUser(id));
    // Wait till all the fetchUser calls are done
    Promise.all(discordUserPromises).then(dUsers => {
        // This replaces your for (let i in users) {}
        Object.values(users).forEach((user, idx) => {
            const score = this.getScore(user);
            const discordUser = dUsers[idx];
            this.list.push([discordUser.username.substring(0, 13), score]);
        });
        // Only resolve once the list is completely populated
        resolve(this.list);
    }).catch(reject);
})
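The same wait can also be written with async/await, which reads more directly. This is only a sketch: `client` is a stub standing in for the real discord.js client so the shape can run on its own, and `getScore` is passed in rather than taken from `this`.

```javascript
// Sketch: wait for every fetchUser call before building the list.
async function buildList(client, users, getScore) {
  // Kick off every fetchUser call in parallel, then wait for all of them
  const ids = Object.keys(users);
  const dUsers = await Promise.all(ids.map(id => client.fetchUser(id)));
  // Only now is the list safe to hand back
  return dUsers.map((user, idx) => [
    user.username.substring(0, 13),
    getScore(users[ids[idx]]),
  ]);
}

// Tiny stub so the sketch can run without Discord
const stubClient = {
  fetchUser: id => Promise.resolve({ username: `user-${id}` }),
};

buildList(stubClient, { a: 5, b: 7 }, score => score)
  .then(list => console.log(list)); // [ [ 'user-a', 5 ], [ 'user-b', 7 ] ]
```

Because `buildList` is `async`, callers get a Promise back and can `await` it or chain `.then()` exactly as before.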
Consider this implementation. I have made some assumptions about your code since you use this.list but don't include what this is an instance of, but most of it should be the same:
/**
 * Object to composite certain user properties
 * @typedef {Object} RealUser
 * @property {String} user The local string for the user
 * @property {User} realUser The user that Discord gives us
 * @property {Number} score The score this user has
 */
/**
 * Class to encapsulate user and score data
 */
class Game {
    /**
     * Constructs a game
     */
    constructor() {
        /**
         * The users we are keeping score of
         * @type {Object}
         */
        this.users = {};
    }

    /**
     * Get the score of a particular user
     * @param {String} user User to get score of
     * @returns {Number} User's score
     */
    getScore(user) {
        return this.users[user] || 0;
    }

    /**
     * Get a composite of users and their status
     * @param {String[]} users The users to put on our leaderboard
     * @returns {Promise<RealUser[]>} Sorted list of users that we included in our leaderboard
     */
    getLeaderBoard(users) {
        // Map all the users that we are given to Promises returned by fetchUser()
        const allRealUsersPromise = Promise.all(users.map(user => client.fetchUser(user)
            /*
             * Create an object that will composite the string that we use
             * to note the user locally, the Discord User object, and the
             * current score of the user that we are tracking locally
             */
            .then(realUser => ({
                user,
                realUser,
                score: this.getScore(user)
            }))));
        /*
         * Once we have all the data we need to construct a leaderboard,
         * we should sort the users by score, and hand back an array
         * of RealUsers which should contain all the data we want to
         * print a leaderboard
         */
        return allRealUsersPromise
            .then(scoredUsers => scoredUsers.sort((a, b) => b.score - a.score)); // highest score first
    }

    /**
     * Prints out a leaderboard
     * @param {String[]} users The users to include on our leaderboard
     */
    printLeaderBoard(users) {
        // Go get a leaderboard to print
        this.getLeaderBoard(users).then(sortedScoredUsers => {
            // Iterate our RealUsers
            sortedScoredUsers.forEach((sortedScoredUser, idx) => {
                const username = sortedScoredUser.realUser.username;
                const score = sortedScoredUser.score;
                // Print out their status
                console.log(`${username.substring(0, 13)} is in position ${idx + 1} with ${score} points`);
            });
        });
    }
}

const game = new Game();
game.users["bob"] = 5;
game.users["sue"] = 7;
game.users["tim"] = 3;
game.printLeaderBoard(Object.keys(game.users));
I am trying to do:
pull 50 rows from third party API
create payloads for issues
create issues (bulk)
back to step 1. until I pulled all rows from third party API.
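Roughly, the loop I'm after looks like this sketch; `fetchPage` and `createIssues` below are hypothetical stand-ins, not my real third-party or Jira calls:

```javascript
// Stand-in: pretend the third-party API has 120 rows, served 50 at a time.
async function fetchPage(offset) {
  const all = Array.from({ length: 120 }, (_, i) => ({ id: i }));
  const rows = all.slice(offset, offset + 50);
  const nextOffset = offset + 50 < all.length ? offset + 50 : 0;
  return { rows, nextOffset };
}
// Stand-in: pretend the bulk create succeeded and report how many.
async function createIssues(payloads) {
  return payloads.length;
}

async function run() {
  let offset = 0;
  let created = 0;
  while (true) {
    const page = await fetchPage(offset);                   // step 1: pull 50 rows
    const payloads = page.rows.map(r => ({ issue: r.id })); // step 2: build payloads
    created += await createIssues(payloads);                // step 3: create issues (bulk)
    if (!page.nextOffset) break;                            // step 4: repeat until done
    offset = page.nextOffset;
  }
  return created;
}

run().then(n => console.log(n)); // 120
```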
Code:
/**
 * Create Jira Issues
 *
 * @param configuration
 * @param assetType
 * @param allIds
 * @param findingsOffset
 */
export const createJiraIssue = async (configuration: IntegrationConfiguration, assetType: string, allIds: any, findingsOffset: number) => {
    // HERE I AM GETTING 50 rows from the third-party API
    const findings: any = await getAllFindings(configuration.assets, 'open', assetType, findingsOffset);
    const custom_field: string = await storage.get('whvid_custom_field_id');
    let jiraIssuePayloads: any = { "issueUpdates": [] };
    for (const item of findings.collection) {
        const i = findings.collection.indexOf(item);
        // Check if we can create the Jira issue
        if (!allIds.includes(item.id.toString()) && await isVulnerabilityAllowedToCreate(item)) {
            // create payloads for issues
        }
        // CREATE issues in bulk
        await createIssues(jiraIssuePayloads);
        jiraIssuePayloads = null;
        // here I am calling the same function but with a new offset
        if (findings['offset'] > 0) {
            await createJiraIssue(configuration, assetType, allIds, findings['offset']);
        }
    }
};
Third party API call:
export const getAllFindings = async (assets: [], status: string, assetType: string, startOffset: number) => {
    const findings = await getFindingByAssetIdAndAssetType(assets, status, assetType, startOffset.toString());
    if (findings.page.totalPage !== findings.page.currentPage) {
        findings.offset = startOffset + limit.FINDINGS;
    }
    return findings;
};
My app usually stops working after the 18th call, with no error and no logs at all. I am very new to JS, so maybe the problem is with the async functions.
Any idea what the problem could be?
Thank you
Initially, my use case was paginating data with a snapshot listener, like here: Firestore Paginating data + Snapshot listener.
But the answer there said it is not currently supported, so I tried to find a workaround, which I found here: https://medium.com/@650egor/firestore-reactive-pagination-db3afb0bf42e . It's nice, but kind of complicated. It also potentially downloads n times as much data as normal, because each change caught by the earlier listeners is also caught by the later listeners.
So now, I'm thinking of dropping the pagination. Instead, each time I want to get more data, I will simply recreate the snapshot listener, but with a 2x limit. Like so:
const [limit, setLimit] = useState(50);
const [data, setData] = useState([]);
...
useEffect(() => {
    const datalist = [];
    db.collection('chats')
        .where('id', '==', chatId)
        .limit(limit)
        .onSnapshot((querySnapshot) => {
            querySnapshot.forEach((item) => datalist.push(item));
            setData(datalist);
        });
}, [limit]);

return <Button title="get more data" onPress={() => { setLimit(limit * 2); }} />;
My question is: is that bad in terms of excessive downloads (on the Spark plan)? When I run the snapshot the first time, it should download 50 items, then 100 the second time, then 200. I'd like to confirm that's how it works.
Also, if there's a reason this approach won't work on a more fundamental level, I'd like to know.
Each time you perform a query that does not specifically target the local persistence cache, it will retrieve the full set of documents from Firestore. That's the only piece of information you need to know. A subsequent query will not deliver partially cached results from a prior query.
The code you show right now is actually very problematic in that it leaks listeners. Nothing in it stops the prior listener when the limit changes and causes the hook to execute again. You should return a function from your useEffect hook that unsubscribes the listener when it's no longer needed; just return the unsubscribe function that onSnapshot returns. You should also read the documentation for useEffect hooks that require cleanup, as yours does here. The cost of leaking a listener is potentially much worse than the cost of repeating the query with new limits, because a leaked listener will continually read new documents as they change; that's why you have to unsubscribe as soon as you don't need it any more.
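To see what the cleanup prevents, here is a small simulation, with a stub standing in for the Firestore query: each onSnapshot call registers a listener, and the function it returns unregisters it.

```javascript
// Stub in place of Firestore: tracks how many listeners are attached.
let activeListeners = 0;
const stubQuery = {
  onSnapshot(callback) {
    activeListeners++;
    callback({ forEach: () => {} }); // deliver one empty snapshot
    return () => { activeListeners--; }; // the unsubscribe function
  },
};

// What the effect body should do; React calls the returned function
// before re-running the effect (and on unmount)
function effect() {
  const unsubscribe = stubQuery.onSnapshot(snap => { /* setData(...) */ });
  return unsubscribe;
}

// Simulate three `limit` changes with React's cleanup step in between
let cleanup = null;
for (let i = 0; i < 3; i++) {
  if (cleanup) cleanup();
  cleanup = effect();
}
console.log(activeListeners); // 1; without the cleanup calls it would be 3
```

In the actual hook, the last line of the effect simply becomes `return unsubscribe;`, where `unsubscribe` is whatever `onSnapshot` returned.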
You do understand it correctly.
With your implementation, you will be billed for 50 reads the first time, 100 reads the second time, 200 reads the third time, and so on (if the number of documents is less than the limit, you will be billed for the actual number of documents).
I actually use an approach very similar to this in one of my published apps, but instead of doubling the number of documents to load each time, I add a fixed number to the limit.
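The doubling arithmetic works out like this (assuming each query returns a full page of `limit` documents):

```javascript
// Total billed reads after `fetches` queries, doubling the limit each time.
function totalBilledReads(initialLimit, fetches) {
  let total = 0;
  let limit = initialLimit;
  for (let i = 0; i < fetches; i++) {
    total += limit; // each new listener re-reads the whole result set
    limit *= 2;
  }
  return total;
}

console.log(totalBilledReads(50, 3)); // 350 (50 + 100 + 200)
```

So after n re-subscriptions you pay roughly double what a single query at the final limit would have cost, which is why adding a fixed increment instead of doubling can be cheaper.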
I do paginated listeners by constructing an object that tracks some internal state and has PageForward, PageBack, ChangeLimit and Unsubscribe methods. For listeners, it is best to unsubscribe the previous listener and set up a new one, and this code does just that. It is probably more efficient to add a layer here: paginate from Firestore with somewhat larger pages (a little compute-expensive in setting up and tearing down listeners, weighed against the actual number of records fetched, which is the actual cost), and then serve smaller pages locally. BUT, for the PaginatedListener:
/**
 * ----------------------------------------------------------------------
 * @function filterQuery
 * builds and returns a query built from an array of filter (i.e. "where")
 * conditions
 * @param {Query} query collectionReference or Query to build the filter upon
 * @param {array} filterArray an (optional) 3xn array of filter (i.e. "where") conditions
 * @returns Firestore Query object
 */
export const filterQuery = (query, filterArray = null) => {
return filterArray
? filterArray.reduce((accQuery, filter) => {
return accQuery.where(filter.fieldRef, filter.opStr, filter.value);
}, query)
: query;
};
/**
 * ----------------------------------------------------------------------
 * @function sortQuery
 * builds and returns a query built from an array of sort (i.e. "orderBy")
 * conditions
 * @param {Query} query collectionReference or Query to build the sort upon
 * @param {array} sortArray an (optional) 2xn array of sort (i.e. "orderBy") conditions
 * @returns Firestore Query object
 */
export const sortQuery = (query, sortArray = null) => {
return sortArray
? sortArray.reduce((accQuery, sortEntry) => {
return accQuery.orderBy(sortEntry.fieldRef, sortEntry.dirStr || "asc");
//note "||" - if dirStr is not present(i.e. falsy) default to "asc"
}, query)
: query;
};
/**
 * ----------------------------------------------------------------------
 * @classdesc
 * An object to allow for paginating a listener for a table read from Firestore.
 * REQUIRES a sorting choice;
 * masks some subscribe/unsubscribe action for paging forward/backward
 * @property {Query} Query that forms the basis for the table read
 * @property {number} limit page size
 * @property {QuerySnapshot} snapshot last successful snapshot/page fetched
 * @property {enum} status status of the pagination object
 *
 * @method PageForward Changes the listener to the next page forward
 * @method PageBack Changes the listener to the next page backward
 * @method Unsubscribe returns the unsubscribe function
 * ----------------------------------------------------------------------
 */
export class PaginatedListener {
_setQuery = () => {
const db = this.ref ? this.ref : fdb;
this.Query = sortQuery(
filterQuery(db.collection(this.table), this.filterArray),
this.sortArray
);
return this.Query;
};
/**
 * ----------------------------------------------------------------------
 * @constructs PaginatedListener constructs an object to paginate through large
 * Firestore Tables
 * @param {string} table a properly formatted string representing the requested collection
 * - always an ODD number of elements
 * @param {array} filterArray an (optional) 3xn array of filter (i.e. "where") conditions
 * @param {array} sortArray a 2xn array of sort (i.e. "orderBy") conditions
 * @param {ref} ref (optional) allows the "table" parameter to reference a sub-collection
 * of an existing document reference (I use a LOT of structured collections)
 *
 * The array is assumed to be sorted in the correct order -
 * i.e. filterArray[0] is added first; filterArray[length-1] last
 * returns data as an array of objects (not dissimilar to Redux State objects)
 * with both the documentID and documentReference added as fields.
 * @param {number} limit (optional)
 * @param {function} dataCallback
 * @param {function} errCallback
 * **********************************************************/
constructor(
table,
filterArray = null,
sortArray,
ref = null,
limit = PAGINATE_DEFAULT,
dataCallback = null,
errCallback = null
) {
this.table = table;
this.filterArray = filterArray;
this.sortArray = sortArray;
this.ref = ref;
this.limit = limit;
this._setQuery();
/*this.Query = sortQuery(
filterQuery(db.collection(this.table), this.filterArray),
this.sortArray
);*/
this.dataCallback = dataCallback;
this.errCallback = errCallback;
this.status = PAGINATE_INIT;
}
/**
 * @method PageForward
 * @returns the unsubscribe function for the new listener
 */
PageForward = () => {
const runQuery =
this.unsubscriber && !this.snapshot.empty
? this.Query.startAfter(_.last(this.snapshot.docs))
: this.Query;
//IF unsubscribe function is set, run it.
this.unsubscriber && this.unsubscriber();
this.status = PAGINATE_PENDING;
this.unsubscriber = runQuery.limit(Number(this.limit)).onSnapshot(
(QuerySnapshot) => {
this.status = PAGINATE_UPDATED;
//*IF* documents exist (i.e. we haven't gone back before the start)
if (!QuerySnapshot.empty) {
//then update document set, and execute callback
this.snapshot = QuerySnapshot;
}
this.dataCallback(
this.snapshot.docs.map((doc) => {
return {
...doc.data(),
Id: doc.id,
ref: doc.ref
};
})
);
},
(err) => {
this.errCallback(err);
}
);
return this.unsubscriber;
};
/**
 * @method PageBack
 * @returns the unsubscribe function for the new listener
 */
PageBack = () => {
const runQuery =
this.unsubscriber && !this.snapshot.empty
? this.Query.endBefore(this.snapshot.docs[0])
: this.Query;
//IF unsubscribe function is set, run it.
this.unsubscriber && this.unsubscriber();
this.status = PAGINATE_PENDING;
this.unsubscriber = runQuery.limitToLast(Number(this.limit)).onSnapshot(
(QuerySnapshot) => {
//acknowledge complete
this.status = PAGINATE_UPDATED;
//*IF* documents exist (i.e. we haven't gone back before the start)
if (!QuerySnapshot.empty) {
//then update document set, and execute callback
this.snapshot = QuerySnapshot;
}
this.dataCallback(
this.snapshot.docs.map((doc) => {
return {
...doc.data(),
Id: doc.id,
ref: doc.ref
};
})
);
},
(err) => {
this.errCallback(err);
}
);
return this.unsubscriber;
};
/**
 * @method ChangeLimit
 * sets the page size limit to a new value, and restarts the paged listener
 * @param {number} newLimit
 * @returns the unsubscribe function for the new listener
 */
ChangeLimit = (newLimit) => {
const runQuery = this.Query;
//IF unsubscribe function is set, run it.
this.unsubscriber && this.unsubscriber();
this.limit = newLimit;
this.status = PAGINATE_PENDING;
this.unsubscriber = runQuery.limit(Number(this.limit)).onSnapshot(
(QuerySnapshot) => {
this.status = PAGINATE_UPDATED;
//*IF* documents exist (i.e. we haven't gone back before the start)
if (!QuerySnapshot.empty) {
//then update document set, and execute callback
this.snapshot = QuerySnapshot;
}
this.dataCallback(
this.snapshot.docs.map((doc) => {
return {
...doc.data(),
Id: doc.id,
ref: doc.ref
};
})
);
},
(err) => {
this.errCallback(err);
}
);
return this.unsubscriber;
};
ChangeFilter = (filterArray) => {
//IF unsubscribe function is set, run it (and clear it)
this.unsubscriber && this.unsubscriber();
this.filterArray = filterArray; // save the new filter array
const runQuery = this._setQuery(); // re-build the query
this.status = PAGINATE_PENDING;
//fetch the first page of the new filtered query
this.unsubscriber = runQuery.limit(Number(this.limit)).onSnapshot(
(QuerySnapshot) => {
this.status = PAGINATE_UPDATED;
//*IF* documents exist (i.e. we haven't gone back before the start)
this.snapshot = QuerySnapshot;
this.dataCallback(
this.snapshot.empty
? null
: this.snapshot.docs.map((doc) => {
return {
...doc.data(),
Id: doc.id,
ref: doc.ref
};
})
);
},
(err) => {
this.errCallback(err);
}
);
return this.unsubscriber;
};
unsubscribe = () => {
//IF unsubscribe function is set, run it.
this.unsubscriber && this.unsubscriber();
this.unsubscriber = null;
};
}
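A hypothetical usage, for orientation; the collection name `'chats'`, the sort field `'createdAt'`, and the `setData` callback below are made-up for illustration and not part of the class above:

```javascript
// Construct a listener for a top-level collection, newest first,
// 25 documents per page (all names here are illustrative).
const listener = new PaginatedListener(
  'chats',                                      // collection path
  null,                                         // no filter conditions
  [{ fieldRef: 'createdAt', dirStr: 'desc' }],  // required sort
  null,                                         // no parent document ref
  25,                                           // page size
  rows => setData(rows),                        // data callback
  err => console.error(err)                     // error callback
);

listener.PageForward();        // subscribe to the first page
// later, in response to UI events:
// listener.PageForward();     // next page (unsubscribes the previous listener)
// listener.PageBack();        // previous page
// listener.ChangeLimit(50);   // new page size, restarts the listener
// on unmount:
// listener.unsubscribe();
```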
I am trying to create a serverless React app to make recommendations by processing some data from Spotify's API. I am using spotify web api js as a wrapper to make the API calls. My problem is that one of the results I get from my functions appears when I call console.log on it, but not when I pass it to another function. Here's the code for the submit handler on my page:
handleSubmit(e) {
    e.preventDefault();
    this.setState({recOutput: {}});
    spotify.setAccessToken(this.props.vars.token);
    spotify.searchArtists(this.state.artist)
        .then(res => this.getRecs(res))
        .then(output => this.processResults(output))
        .then(processed => this.setState({resultReceived: true, recOutput: processed}))
        .catch(err => this.handleError(err));
}
And here are all the functions it's calling:
/**
 * Gets recommendations for a specific artist and outputs the recOutput value in the redux store
 * @param {string} name The name of the artist
 * @return Promise with output
 * TODO: catch errors again lol
 */
async getRecs(name) {
    const MAX_SONGS = 50;
    var output = {};
    spotify.searchPlaylists(name, {limit: 1})
        .then(searchTotal => {
            spotify.searchPlaylists(name, {limit: 50, offset: Math.floor(Math.random() * (searchTotal.playlists.total / 50))})
                .then(res => {
                    for (let i of res.playlists.items) {
                        spotify.getPlaylistTracks(i.id).then(pt => {
                            let curSongs = 0;
                            if (pt == undefined) {
                                return;
                            }
                            for (let track of pt.items) {
                                if (curSongs > MAX_SONGS) break;
                                if (track.track != null
                                    && track.track.artists[0].name !== name
                                    && track.track.artists[0].name !== "") {
                                    if (track.track.artists[0].name in output) {
                                        output[track.track.artists[0].name]++;
                                    } else {
                                        output[track.track.artists[0].name] = 1;
                                    }
                                }
                                curSongs++;
                            }
                        });
                    }
                });
        })
        .catch(err => this.handleError(err));
    return output;
}
/**
 * Processes results from our query to spotify, removing entries beyond a certain threshold, then sorting the object.
 * @return Promise for updated results update
 */
async processResults(input) {
    debugger;
    let processed = {};
    for (let key in input) {
        console.log(key);
        if (input[key] > 10) {
            processed.key = input.key;
        }
    }
    return processed;
}
My problem is that when I call .then(output => this.processResults(output)), the process method receives an empty output in the debugger, but when I call .then(output => console.log(output)), I see the expected output for the function.
Here is the context of my component:
I have the following code, which processes a queue. I need to exit the function when there are no messages in the queue and there is not enough time to process more messages. My problem is that it doesn't jump out of the function upon failing the condition, and I think it's because this is a recursive function, but I cannot figure it out.
/**
 * Check if there is enough time to process more messages
 *
 * @param {} context
 * @returns {boolean}
 */
async function enoughTimeToProcess(context) {
    return context.getRemainingTimeInMillis() > 230000;
}
/**
 * Consume the queue and increment usages
 *
 * @param context
 *
 * @returns {boolean}
 */
async function process(context) {
    const messagesPerRequest = queueConst.messagesPerRequest;
    const messagesToBeDeleted = [];
    const queue = new queueClient();
    const messages = await queue.getMessages(messagesPerRequest);
    if (messages === undefined) {
        if (await enoughTimeToProcess(context) === true) {
            await process(context);
        } else {
            return false;
        }
    }
    const responses = messages.map(async (messageItem) => {
        const messageBody = JSON.parse(messageItem.Body);
        const parsedMessage = JSON.parse(messageBody.Message);
        const accountId = parsedMessage[0].context.accountId;
        let code = parsedMessage[0].context.code;
        // Our DB supports only lowercase characters in the path
        code = code.toLowerCase();
        const service = parsedMessage[0].name;
        const count = parsedMessage[0].increment;
        const storageResponse = await incrementUsage(
            { storageClient: storage, code, accountId, service, count }
        );
        if (storageResponse) {
            messagesToBeDeleted.push({
                Id: messageItem.MessageId,
                ReceiptHandle: messageItem.ReceiptHandle,
            });
        }
        return 1;
    });
    const processedMessages = await Promise.all(responses);
    const processedMessagesCount = processedMessages.length;
    if (messagesToBeDeleted.length > 0) {
        console.log(`${processedMessagesCount} messages processed.`);
        await queue.deleteMessageBatch(messagesToBeDeleted);
    }
    if (await enoughTimeToProcess(context) === true) {
        await process(context);
    }
    return true;
}
I think the problem arises when messages is undefined and there is still enough time: the recursive function gets called again and again, because both conditions keep holding, and it probably exhausts the available resources. Note also that when that inner await process(context) call returns, execution falls through to messages.map on an undefined value instead of returning.
Try sleeping for some time before calling the process function again, just to be sure this is the problem.
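A minimal sleep helper for that experiment, built on setTimeout (the delay below is arbitrary):

```javascript
// Promise-based sleep: resolves after roughly `ms` milliseconds.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

(async () => {
  const start = Date.now();
  await sleep(50);
  console.log('slept ~' + (Date.now() - start) + 'ms');
})();
```

Inside `process()`, the recursive branch would then become `await sleep(1000); await process(context);`, so a burst of empty getMessages responses doesn't spin at full speed.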
Since the release of iOS 6, my web app has hit a series of bugs, one of the worst being what I'm almost 100% positive is WebSQL transactions being queued. When I first load the app in Mobile Safari (iPad), the transactions work fine. Then, if I close Safari and open it again, the transactions seem to be queued and never execute.
If I open the dev tools and run a simple alert, the methods will fire; if I just hit reload, the transactions work fine as well; and if I delay running the db transactions by a second or so, it also works fine.
I do not want to run a setTimeout to run the transactions.
Is this a caching issue that Safari has now implemented?
If anyone has ANY good ideas on how to fix this, please answer below.
Thanks in advance.
It may not be a bug. You may be using a series of transactions unnecessarily. You can issue multiple requests per transaction: in the onsuccess callback, you can reuse the transaction. It should work. At the same time, limit the number of requests per transaction. setTimeout should never be necessary.
Here is how a single transaction is used to insert multiple objects
/**
 * @param {goog.async.Deferred} df
 * @param {string} store_name table name.
 * @param {!Array.<!Object>} objects object to put.
 * @param {!Array.<(Array|string|number)>=} opt_keys
 */
ydn.db.req.WebSql.prototype.putObjects = function (df, store_name, objects, opt_keys) {
var table = this.schema.getStore(store_name);
if (!table) {
throw new ydn.db.NotFoundError(store_name);
}
var me = this;
var result_keys = [];
var result_count = 0;
/**
 * Put an item at i. This will invoke the callback on df once all objects
 * have been put; otherwise it recursively calls itself for the next item at i+1.
 * @param {number} i
 * @param {SQLTransaction} tx
 */
var put = function (i, tx) {
// todo: handle undefined or null object
var out;
if (goog.isDef(opt_keys)) {
out = table.getIndexedValues(objects[i], opt_keys[i]);
} else {
out = table.getIndexedValues(objects[i]);
}
//console.log([obj, JSON.stringify(obj)]);
var sql = 'INSERT OR REPLACE INTO ' + table.getQuotedName() +
' (' + out.columns.join(', ') + ') ' +
'VALUES (' + out.slots.join(', ') + ');';
/**
 * @param {SQLTransaction} transaction transaction.
 * @param {SQLResultSet} results results.
 */
var success_callback = function (transaction, results) {
result_count++;
result_keys[i] = goog.isDef(out.key) ? out.key : results.insertId;
if (result_count == objects.length) {
df.callback(result_keys);
} else {
var next = i + ydn.db.req.WebSql.RW_REQ_PER_TX;
if (next < objects.length) {
put(next, transaction);
}
}
};
/**
 * @param {SQLTransaction} tr transaction.
 * @param {SQLError} error error.
 */
var error_callback = function (tr, error) {
if (ydn.db.req.WebSql.DEBUG) {
window.console.log([sql, out, tr, error]);
}
df.errback(error);
return true; // roll back
};
//console.log([sql, out.values]);
tx.executeSql(sql, out.values, success_callback, error_callback);
};
if (objects.length > 0) {
// send parallel requests
for (var i = 0; i < ydn.db.req.WebSql.RW_REQ_PER_TX && i < objects.length; i++) {
put(i, this.getTx());
}
} else {
df.callback([]);
}
};
Regarding the transaction queue, it is better handled by the application rather than by SQLite, for robustness. Basically, we can watch for the transaction-complete event before starting a new transaction. It is also fine to run multiple transactions as long as they are under control; out of control would be opening transactions inside a loop. Generally I open only a couple of transactions.
Here is how transaction is queued:
/**
 * Create a new isolated transaction. After creating a transaction, use
 * {@link #getTx} to receive an active transaction. If the transaction is not
 * active, it returns null; in that case a new transaction must be re-created.
 * @export
 * @param {Function} trFn function to invoke in the transaction.
 * @param {!Array.<string>} store_names list of keys or
 * store names involved in the transaction.
 * @param {ydn.db.TransactionMode=} opt_mode mode, defaults to 'readonly'.
 * @param {function(ydn.db.TransactionEventTypes, *)=} oncompleted
 * @param {...} opt_args
 * @override
 */
ydn.db.tr.TxStorage.prototype.transaction = function (trFn, store_names, opt_mode, oncompleted, opt_args) {
//console.log('tr starting ' + trFn.name);
var scope_name = trFn.name || '';
var names = store_names;
if (goog.isString(store_names)) {
names = [store_names];
} else if (!goog.isArray(store_names) ||
(store_names.length > 0 && !goog.isString(store_names[0]))) {
throw new ydn.error.ArgumentException("storeNames");
}
var mode = goog.isDef(opt_mode) ? opt_mode : ydn.db.TransactionMode.READ_ONLY;
var outFn = trFn;
if (arguments.length > 4) { // handle optional parameters
var args = Array.prototype.slice.call(arguments, 4);
outFn = function () {
// Prepend the bound arguments to the current arguments.
var newArgs = Array.prototype.slice.call(arguments);
//newArgs.unshift.apply(newArgs, args); // pre-apply
newArgs = newArgs.concat(args); // post-apply
return trFn.apply(this, newArgs);
}
}
outFn.name = scope_name;
var me = this;
if (this.mu_tx_.isActive()) {
//console.log(this + ' active')
this.pushTxQueue(outFn, store_names, mode, oncompleted);
} else {
//console.log(this + ' not active')
var transaction_process = function (tx) {
me.mu_tx_.up(tx, scope_name);
// now execute transaction process
outFn(me);
me.mu_tx_.out(); // flag transaction callback scope is over.
// transaction is still active and use in followup request handlers
};
var completed_handler = function (type, event) {
me.mu_tx_.down(type, event);
/**
 * @preserve_try
 */
try {
if (goog.isFunction(oncompleted)) {
oncompleted(type, event);
}
} catch (e) {
// swallow error. document it publicly.
// this is necessary to continue transaction queue
if (goog.DEBUG) {
throw e;
}
} finally {
me.popTxQueue_();
}
};
if (ydn.db.tr.TxStorage.DEBUG) {
window.console.log(this + ' transaction ' + mode + ' open for ' + JSON.stringify(names) + ' in ' + scope_name);
}
this.storage_.newTransaction(transaction_process, names, mode, completed_handler);
}
};
As it turns out, initializing Facebook before WebSQL was causing the problem. After commenting out FB, the app behaved properly, which is why setTimeout solved the issue as well: by then the FB API was ready. How the thread of execution gets blocked, I don't know.
So, to anyone using FB and then trying to execute WebSQL transactions... delay FB!
Though WebSQL is still running a bit slow on Safari load...