I am trying to sum up the results that I get from a series of async calls:
let sum = 0;
for (const id of ids) {
  const res = await getRes(id);
  sum += res;
}
Is this a valid way to do it? Any better solution?
The code that you have written seems to be correct.
To validate it you can write some unit tests; I can't say how difficult that is because I don't know what getRes is doing or what external dependencies (if any) you would have to mock. Generally speaking, you should get into the habit of unit testing your code: one of the benefits it brings to the table is a way to validate your implementation.
You can also consider getting the results in parallel, which can actually speed things up. Again, I don't know what getRes is doing, but I suppose it performs some sort of IO (e.g. a database query). Given the single-threaded nature of JavaScript, when you have a bunch of independent asynchronous operations you can always try to perform them in parallel and collect their results, so that you can aggregate them later. Note the emphasis on the word independent: in order to perform a bunch of operations in parallel, they need to be independent (if they are not, the correctness of your code will be compromised).
Since you are performing a sum of numbers, it is safe to compute the addends in parallel.
This is the simplest possible parallel solution:
async function sum(ids) {
  const getResTasks = ids.map(id => getRes(id));
  const addends = await Promise.all(getResTasks);
  return addends.reduce((memo, item) => memo + item, 0);
}
Pay attention: this is a naive implementation. I don't know how many items the ids array can contain. If that number can be huge (thousands of items or more), code like the previous snippet could put a heavy load on the external dependency used to get the sum's addends, so you should take some precautions to limit the degree of parallelism.
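For example, here is a minimal sketch of one way to do that, processing the ids in fixed-size batches (the batch size of 10 is an arbitrary example value; tune it to what your dependency can sustain):
async function sumInBatches(ids, batchSize = 10) {
  let sum = 0;
  for (let i = 0; i < ids.length; i += batchSize) {
    // run at most batchSize calls at a time, then aggregate their results
    const batch = ids.slice(i, i + batchSize);
    const addends = await Promise.all(batch.map(id => getRes(id)));
    sum += addends.reduce((memo, item) => memo + item, 0);
  }
  return sum;
}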
I have a jQuery autocomplete field that has to search through several thousand items, populated from an IndexedDB query (using the idb wrapper). The following is the autocomplete function called when the user begins typing in the box. hasKW() is a function that finds keywords.
async function siteAutoComplete(request, response) {
  const db = await openDB('AgencySite');
  const hasKW = createKeyWordFunction(request.term);
  const state = "NY";
  const PR = 0;
  const agency_id = 17;
  const range = IDBKeyRange.bound([state, PR, agency_id], [state, PR, agency_id || 9999999]);
  let cursor = await db.transaction('sites').store.index("statePRAgency").openCursor(range);
  let result = [];
  while (cursor) {
    if (hasKW(cursor.value.name)) result.push({
      value: cursor.value.id,
      label: cursor.value.name
    });
    cursor = await cursor.continue();
  }
  response(result);
}
My question is this: I'm not sure if the cursor is making everything slow. Is there a way to get all database rows that match the query without using a cursor? Is building the result array slowing me down? Is there a better way of doing this? Currently it takes 2-3s to show the autocomplete list.
I hope this will be useful to someone else. I removed the cursor and just downloaded the whole store into a JavaScript array and then used .filter. The speedup was dramatic: it took 2300ms using the way above and about 21ms using this:
let result = await db.transaction('sites').store.index("statePRAgency").getAll();
response(result.filter(site => hasKW(site.name)).map(site => ({ value: site.id, label: site.name })));
You probably want to use an index, where by the term index I mean a custom-built one that represents a search engine index. You cannot easily and efficiently perform "startsWith"-style queries over one of indexedDB's indices because they are effectively whole-value (or at least lexicographic).
There are many ways to create the search engine index I am suggesting. You probably want something like a prefix tree, also known as a trie.
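For reference, here is a minimal in-memory trie sketch (insert and prefix lookup only, independent of indexedDB; the helper names are hypothetical):
function createTrieNode() {
  return { children: {}, ids: [] };
}

function trieInsert(root, word, id) {
  let node = root;
  for (const ch of word.toLowerCase()) {
    node = node.children[ch] || (node.children[ch] = createTrieNode());
    node.ids.push(id); // every node remembers the ids reachable through it
  }
}

function triePrefixLookup(root, prefix) {
  let node = root;
  for (const ch of prefix.toLowerCase()) {
    node = node.children[ch];
    if (!node) return [];
  }
  return node.ids;
}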
Here is a nice article by John Resig that you might find helpful: https://johnresig.com/blog/javascript-trie-performance-analysis/. Otherwise, I suggest searching around on Google for trie implementations and then figuring out how to represent a similar data structure within an indexedDB object store, or an indexedDB index on an object store.
Essentially, insert the data first without the properties used by the index. Then, in an "indexing step", visit each object and index its value, and set the properties used by the indexedDb index. Or do this at time of insert/update.
From there, you probably want to open a connection shortly after page load and keep it open for the entire duration of the page. Then query against the index every time a character is typed (you probably want to rate-limit this call to refrain from querying more than n times per second, perhaps using some kind of debounce helper function, such as the sketch below).
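For illustration, a minimal debounce sketch (the 200ms delay is an arbitrary example value):
function debounce(fn, delayMs) {
  let timer;
  return function(...args) {
    // reset the timer on every call; fn only fires after delayMs of silence
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

const debouncedAutoComplete = debounce(siteAutoComplete, 200);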
On the other hand, I might be a bit rusty on this one, but maybe you can create an index on the string property, then use a lower bound that is the entered characters. A string that is a prefix of a longer string sorts earlier in lexicographic order, so maybe it is actually that easy. You would also need to impose an upper bound consisting of the characters entered so far concatenated with some kind of sentinel value that can never realistically exist in the data, something silly like ZZZZZ.
Try this out in the browser's console:
indexedDB.cmp('test', 'tasting'); // 1
indexedDB.cmp('test', 'testing'); // -1
indexedDB.cmp('test', 'test'); // 0
You essentially want to experiment with a query like this:
const sentinel = 'ZZZ';
const index = store.index('myIndex');
const bounds = IDBKeyRange.bound(value, value + sentinel);
// getAll returns every match in the range; get would return only the first
const request = index.getAll(bounds);
You might need to tweak the sentinel, experiment with the other parameters to IDBKeyRange.bound (the inclusive/exclusive flags), probably store the value in normalized case so that the search is case-insensitive, avoid ever sending a query when nothing has been typed, etc.
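Putting it together, here is a hedged end-to-end sketch against the question's own idb usage. The "nameLower" index is hypothetical (an index over a lower-cased copy of the name), and '\uffff' is a common alternative sentinel:
async function prefixSearch(db, term, response) {
  const value = term.toLowerCase(); // assumes names were stored lower-cased in this index
  const index = db.transaction('sites').store.index('nameLower');
  const range = IDBKeyRange.bound(value, value + '\uffff');
  const matches = await index.getAll(range);
  response(matches.map(site => ({ value: site.id, label: site.name })));
}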
I'm writing a function on a Node.js server which reads a CSV file. I need to read all lines and execute several promised operations (MySQL queries) for each one (update or insert, then set a specified column which marks the item as "modified in this execution"), and once this finishes, change another column on those not updated or inserted to mark those items as "deleted".
At first, the problem I had was that this CSV has millions of lines (literally) and about a hundred columns, so I ran out of memory quite easily, and this number of lines can grow or shrink, so I cannot know the number of lines I will have to process each time I receive it.
I made a simple program that splits this CSV into several others with a manageable number of lines each, so my server can work with each one of them without dying, thus producing an unknown number of files. So now I have a different problem.
I want to read all of those CSVs, perform those operations, and, once those operations are finished, execute the final one which will change those not updated/inserted. The only issue is that I need to read all of them, and I cannot do this simultaneously; I have to do it sequentially, no matter how many there are (as said, after splitting the main CSV, I may have 1 million lines divided into 3 files, or 2 million into 6 files).
At first I thought about using a forEach loop, but the problem is that forEach doesn't respect the promisification, so it would launch all of them at once, the server would run out of memory loading all those CSVs, and then die. Honestly, using a while(boolean) on each iteration of the forEach to wait for the resolve of each promisified function seems pretty... smelly to me, plus I feel like that solution would stop the server from working properly, so I'm looking for a different solution.
Let me give you a quick explanation of what I want:
const arrayReader = ([arrayOfCSVs]) => {
  initialFunction();
  functions(arrayOfCSVs[0])
    .then((result) => {
      functions(arrayOfCSVs[1])
        .then((result2) => {
          functions(arrayOfCSVs[2])
          // OnAndOnAndOnAndOn...
            .then((resultX) => {
              executeFinalFunction();
            });
        });
    });
};
You can use Array.reduce to take the previous promise and queue the next one on it, without the need for manual waiting.
const arrayReader = ([arrayOfCSVs]) => {
  initialFunction();
  return arrayOfCSVs.reduce((prom, csv) => {
    return prom.then(() => functions(csv));
  }, Promise.resolve()).then(resultX => executeFinalFunction());
}
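If your Node version supports async/await, an equivalent sketch expresses the same sequential processing as a plain loop:
async function arrayReader([arrayOfCSVs]) {
  initialFunction();
  for (const csv of arrayOfCSVs) {
    await functions(csv); // each file is fully processed before the next starts
  }
  return executeFinalFunction();
}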
I want to convert a list of ids into a list of Tasks and run them concurrently, similar to Promise.all. I am aware of applicatives, but I want to apply an unknown number of tasks, so I don't believe that will be the best approach.
Say I have a Task that contains an array of Tasks.
Task.of([Task.of(1), Task.of(2)])
Is there any way to fold the tasks down into a single task that will run them all, or is there a better way to handle the data transformation?
The snippet has data.Task included that you can copy if you want to provide an example.
http://folktalegithubio.readthedocs.io/en/latest/api/data/task/
// Task([Task])
Task.of([0, 1, 2])
  .map(t => t.map(Task.of))
  .fork(console.error, console.log)
<script src="https://codepen.io/synthet1c/pen/bWOZEM.js"></script>
control.async.parallel is exactly what you are looking for.
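A hedged usage sketch, assuming folktale 1.x, where control.async is parameterized by the Task implementation (check the docs of your installed version for the exact shape):
const Task = require('data.task');
const Async = require('control.async')(Task);

// runs all tasks concurrently and resolves with an array of their results
Async.parallel([Task.of(1), Task.of(2)])
  .fork(console.error, console.log); // logs [1, 2]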
I am aware of applicatives, but I want to apply an unknown number of tasks so I don't believe that will be the best approach.
That shouldn't hold you back: arrays are traversable, and sequenceA would do exactly what you want (though quite inefficiently), if it were implemented in folktale, which doesn't feature lists or even a control.applicative.
control.monad.sequence should work the same as the applicative sequence, but it unnecessarily uses chain instead of ap. And data.task is problematic anyway, in that its ap is not derivable from chain with the same semantics.
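To make the chain-based behaviour concrete, here is a minimal hand-rolled sequence for data.task (being built on chain, it runs the tasks one after another; for actual parallelism use control.async.parallel as mentioned above):
const Task = require('data.task');

// Array(Task) -> Task(Array): folds the tasks into one task of all results
const sequence = tasks =>
  tasks.reduce(
    (acc, task) => acc.chain(xs => task.map(x => xs.concat([x]))),
    Task.of([])
  );

sequence([Task.of(1), Task.of(2)]).fork(console.error, console.log); // logs [1, 2]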
Recently I've come across a school of thought that advocates replacing conditionals (if/switch/ternary operators, same below) with something else (polymorphism, strategy, visitor, etc.).
As I'd like to learn by trying new approaches, I've revised some of my JavaScript code and immediately found a relevant case, which is basically the following list of ifs (essentially the same as a switch/nested ternary operators):
function checkResult(input) {
  if (isCond1Met(input)) return result1;
  if (isCond2Met(input)) return result2;
  if (isCond3Met(input)) return result3;
  // More conditionals here...
  if (isCondNMet(input)) return resultN;
  return defaultResult;
}
Shortly after, I came up with trying a predicate list instead.
Assuming that checkResult always returns a String (which applies to my specific case), the above list of ifs can be replaced with a list of predicates (although it uses arrow functions and find, which are ES6+ features):
var resultConds = {
  result1: isCond1Met,
  result2: isCond2Met,
  result3: isCond3Met,
  // More mappings here...
  resultN: isCondNMet
};
var results = Object.keys(resultConds);
function checkResult(input) {
  return results.find(result => resultConds[result](input)) || defaultResult;
}
(On a side note: whether checkResult should take resultConds and defaultResult as arguments is a relatively minor issue here, same below)
If the above assumption doesn't hold, the list of predicates can be changed into this instead:
var conds = [
  isCond1Met,
  isCond2Met,
  isCond3Met,
  // More predicates here...
  isCondNMet
];
var results = [
  result1,
  result2,
  result3,
  // More results here...
  resultN
];
function checkResult(input) {
  return results[conds.findIndex(cond => cond(input))] || defaultResult;
}
A bigger refactoring may be this:
var condResults = {
  cond1: result1,
  cond2: result2,
  cond3: result3,
  // More mappings here...
  condN: resultN
};
var conds = Object.keys(condResults);
// isCondMet is assumed to map each key ('cond1', ...) to its predicate function
function checkResult(input) {
  return condResults[conds.find(cond => isCondMet[cond](input))] || defaultResult;
}
I'd like to ask what the pros and cons are (preferably with relevant experience and explanations) of replacing a list of conditionals with a predicate list, at least in cases like these (e.g. an input validation check returning a non-boolean result based on a list of conditionals).
For instance, which approach generally leads to better:
Testability (like unit testing)
Scalability (when there are more and more conditionals/predicates)
Readability (for those familiar with both approaches, to ensure sufficiently fair comparisons)
Usability/Reusability (avoiding/reducing code duplication)
Flexibility (for example, when the internal lower-level logic for validating inputs changes drastically while preserving the same external higher-level behavior)
Memory footprint/time performance (specific to JavaScript)
Etc.
Also, if you think the predicate list approach can be improved, please feel free to demonstrate the pros and cons of that improved approach.
Edit: As @Bergi mentioned, JavaScript objects are unordered, so the ES6+ Map may be a better choice :)
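For illustration, a minimal sketch of the same idea using an ES6 Map, which (unlike plain objects) guarantees iteration in insertion order; the predicates and results are the same hypothetical names as above:
const condResults = new Map([
  [isCond1Met, result1],
  [isCond2Met, result2],
  [isCond3Met, result3]
]);

function checkResult(input) {
  // Map iteration follows insertion order, so earlier entries win
  for (const [cond, result] of condResults) {
    if (cond(input)) return result;
  }
  return defaultResult;
}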
In general, putting business logic (like these predicates) in separate objects always improves testability, scalability, readability, maintainability and reusability. This comes from the modularisation inherent to this OOP design, and allows you to keep these predicates in a central store and apply whatever business procedures you want on them, staying independent from your codebase. Essentially, you're treating those conditions as data. You can even choose them dynamically, transform them to your liking, and work with them on an abstract layer.
Readability might suffer when you need to go to certain lengths to replace a simple, short condition with the generic approach, but it pays off well if you have many predicates.
Flexibility for adding/changing/removing predicates improves a lot; however, the flexibility to choose a different architecture (how to apply which kind of predicates) will worsen: you cannot just change the code in one small location, you need to touch every location that uses the predicates.
Memory and performance footprints will be bigger as well, but not enough to matter.
Regarding scalability, it only works when you choose a good architecture. A list of rules that needs to be applied in a linear fashion might no longer suffice beyond a certain size.
I am very much a noob at this, and regarding readability and usability I prefer the latter, but it's a personal preference and I'm usually backwards, so you shouldn't listen to me.
The conditional approach is more portable, I mean more easily ported to non-object-oriented languages. But that may never happen, and even if it does, the people doing that kind of thing probably have the tools and experience, so it shouldn't be an issue. Regarding performance, I doubt there will be any significant differences; the first one looks like branching and the second kind of like random access.
But as I said, you shouldn't listen to me, as I'm just guessing.
I am writing a scheduling program that returns JSON data about courses. I just started Node.js a week ago so I'm not sure if I'm thinking right.
I am trying to find a better way to write this code and avoid callback hell. I have already written the getJSON method.
/*getJSON(course-name, callback(JSONretrieved)): gets JSON info about name and
takes in a callback to manipulate retrieved JSON data*/
Now I want to get multiple course names from a course array and check for time conflicts between them. I will then add all viable combinations to an answers array. My current idea is:
/*courseArray: array of classes to be compared
answers: array of all nonconflicting classes*/
var courseArray = ['MATH-123', 'CHEM-123'];
var answers = [];
getJSON(courseArray[0], function(class1data) {
  getJSON(courseArray[1], function(class2data) {
    if (noConflict(class1data, class2data)) answers.push(merge(class1data, class2data));
  });
});
Finally, to access the answers array we wrap the entire code from above:
function getAnswers(cb) {
  /*courseArray: array of classes to be compared
  answers: array of all nonconflicting classes*/
  var courseArray = ['MATH-123', 'CHEM-123'];
  var answers = [];
  getJSON(courseArray[0], function(class1data) {
    getJSON(courseArray[1], function(class2data) {
      // check for time conflicts between class1data and class2data
      if (noConflict(class1data, class2data)) answers.push(merge(class1data, class2data));
      // cb must run inside the innermost callback, after the async calls finish
      cb(answers);
    });
  });
}
and we call the function
getAnswers(function(ans) {
  // do processing of answers
  console.log(ans)
})
My main question is if there is any way to make this code shorter, more readable, or less callback hecky.
You can use a promise library to make things easier for yourself. The way you're doing it can quickly get out of hand if the user selects more than a handful of courses to compare.
With something like async, you can make parallel calls to getJSON and your conflict code will run inside a single callback once all of the getJSON calls have returned. Your code will be much more readable and maintainable for large arrays of courses.
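For instance, a hedged sketch using the async library's map (wrapping getJSON, whose callback is not error-first, to fit async's (err, result) convention); getJSON, noConflict, and merge are the question's own functions:
const async = require('async');

function getAnswers(courseArray, cb) {
  // fetch all courses in parallel; the final callback fires once all return
  async.map(
    courseArray,
    (name, done) => getJSON(name, data => done(null, data)),
    function(err, classes) {
      if (err) return cb(err);
      const answers = [];
      // compare every pair of classes for time conflicts
      for (let i = 0; i < classes.length; i++) {
        for (let j = i + 1; j < classes.length; j++) {
          if (noConflict(classes[i], classes[j])) {
            answers.push(merge(classes[i], classes[j]));
          }
        }
      }
      cb(null, answers);
    }
  );
}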