converts to double array after declaring req.body - javascript

I'm trying to run a forEach over a variable that can be a single id or an array of multiple ids. First I try to verify whether it's an array and, if not, I declare it as an array of only one item.
router.post("/providerQuote", function (req, res) {
    console.log(req.body.idQuote);
    var idQuote = [];
    if (req.body.idQuote.isArray) {
        idQuote = Object.values(req.body.idQuote);
    } else {
        idQuote = [req.body.idQuote];
    }
    console.log(idQuote);
    idQuote.forEach(function (quote) {
        console.log(quote);
    });
});
This is the console log:
Server Started...
[ '5bfed54c9b0d061574d874c0', '5bfed54c9b0d061574d874bf' ]
[ [ '5bfed54c9b0d061574d874c0', '5bfed54c9b0d061574d874bf' ] ]
[ '5bfed54c9b0d061574d874c0', '5bfed54c9b0d061574d874bf' ]
The problem here is that somehow it is wrapping the req.body array inside another array.

From what I understand you’re saying, the idQuote value in the response may be a string or it may be an array of strings, and you’re trying to just always have a single array of strings. If I’m correct in that, I would just change your code to the following:
router.post("/providerQuote", function (req, res) {
    console.log(req.body.idQuote);
    var idQuote = req.body.idQuote;
    if (!Array.isArray(idQuote)) {
        idQuote = [idQuote];
    }
    console.log(idQuote);
    idQuote.forEach(function (quote) {
        console.log(quote);
    });
});
This will simply wrap your idQuote in an array if it is not currently an array.
By defaulting the idQuote variable to req.body.idQuote, I’m doing a few things:
I’m simplifying the code. If idQuote is actually an array, there’s nothing more to do! We’re good to go.
We aren't retyping req.body.idQuote multiple times when we need to use it either way. There's a principle in programming called DRY, which stands for "Don't Repeat Yourself": if you find yourself doing the same thing multiple times, there's likely a way to simplify your code.
I’m avoiding doing excessive processing. Your original code created an empty array and then filled it with the values of a different array. No offense at all to you, but that’s quite a bit of unnecessary work (create a whole new array object in memory and then loop through one array copying its values into a new one) when you have a perfectly good array right there that looks exactly the way you want.
Excessive processing costs money: on a server it means you’ll need a bigger server sooner because you can’t handle quite as much as you could have if things were more efficient. On the client excessive processing means battery drain and at times a choppy and laggy user experience. It’s good to look for little things that could be improved because while one tiny thing doesn’t really matter, they can add up quickly if there’s a lot of them in your code, especially if they’re in a long loop. It’s little tricks that you learn along the way that actually help quite a bit.
Also know that isArray is a static method on the Array class. Since you had no parentheses, you were basically checking whether the current object has an isArray property, and none of them do (because it's a static method on the Array class, not on array instances), so the check was always falsy and pushed you down into the else branch, where you wrapped the array in another array.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/isArray
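To see the difference concretely, here is a small sketch you can run in Node (the sample ids mirror the question's console output):

```javascript
const ids = ['5bfed54c9b0d061574d874c0', '5bfed54c9b0d061574d874bf'];

// isArray is a static method on the Array class, not on array instances,
// so reading it off a value is always undefined (falsy):
console.log(ids.isArray);          // undefined
console.log('single-id'.isArray);  // undefined

// The static method is the correct check:
console.log(Array.isArray(ids));         // true
console.log(Array.isArray('single-id')); // false
```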

I think you could try this
var idQuote = [];
if (Array.isArray(req.body.idQuote)) {
    idQuote = req.body.idQuote; // already an array, use it directly
} else {
    idQuote.push(req.body.idQuote); // wrap the single id
}
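The wrap-if-needed pattern from both answers can also be pulled into a tiny helper (a sketch; toArray is an illustrative name, not part of the original code):

```javascript
// Normalize a value that may be a single id or an array of ids.
function toArray(value) {
  return Array.isArray(value) ? value : [value];
}

console.log(toArray('5bfed54c9b0d061574d874c0')); // ['5bfed54c9b0d061574d874c0']
console.log(toArray(['a', 'b']));                 // ['a', 'b'] (unchanged)
```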

Related

What are the performance concerns of chaining multiple Array.prototype.*function*?

Say I have a function f that iterates over an array of n elements to filter it with k conditions.
To improve readability I would be tempted to write it like so:
result = array
.filter(predicate1)
.filter(predicate2)
.filter(predicate3)
.filter(...)
Especially to avoid having to write things like this:
result = array.filter(e => test1(e) && test2(e) && test3(e) && ...)
Of course the above looks OK, but if I have multiple tests that are hard to write nicely, it ends up being a mess of indentation (please consider that).
With k predicates, the complexity seems to be O(k*n), right?
What are the performance risks if:
there are lots of elements in the array (hypothetical lot, I'd actually appreciate your definition of lot)
we execute the function f several times a second or even more
other similar functions could be executed too
I have taken the case of filter because I just faced it, but the question holds for any chained method used when iterating arrays or other iterables.
EDIT:
In fact what I really want to know is the danger of using this kind of design globally.
Of course it is not going to be much of a concern on a single function. I am talking about the consequences on a complete application, where every similar case uses more iterations.
The performance risk is simply that the time it takes to do repeated iterations (and/or to create and then reclaim those temporary arrays) instead of a single iteration creates a perceivable delay in your application, perhaps only on lower-capability hardware. The general advice is: Write what's clear and maintainable, and worry about a performance problem if/when you have a performance problem to worry about.
Whether you'll get one is impossible to say from the information in your question. You've said there are "lots" of entries in the array, but "lots" doesn't tell us much. :-) You've also said you'll be doing this processing several times a second, which does suggest that you'd be better off not unnecessarily looping through the array and creating intermediate arrays. But your mileage may vary.
If this comes up for you a lot, you might consider giving yourself some utility functions, such as:
function multiAndFilter(array, ...filters) {
    return array.filter(entry => filters.every(filter => filter(entry)));
}
function multiOrFilter(array, ...filters) {
    return array.filter(entry => filters.some(filter => filter(entry)));
}
...and so on, and then:
result = multiAndFilter(array, predicate1, predicate2, predicate3);
You could maintain readability and maintainability by taking an array of predicate functions and iterating over them as well, using a single loop for the filtering.
const predicates = [test1, test2, test3];
const result = array.filter(e => predicates.every(fn => fn(e)));
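As a runnable sketch of that single-loop pattern (test1 through test3 here are made-up numeric predicates, not from the question):

```javascript
const test1 = n => n > 0;         // positive
const test2 = n => n % 2 === 0;   // even
const test3 = n => n < 100;       // below 100

const predicates = [test1, test2, test3];
const array = [-4, 2, 8, 150, 33, 40];

// One pass over the array; every() short-circuits on the first failing predicate.
const result = array.filter(e => predicates.every(fn => fn(e)));
console.log(result); // [2, 8, 40]
```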

Should I use array methods like map and filter, if I'm not going to return anything?

In the last year I've been using array methods like map and filter more often instead of the standard for loop on an array. It feels simpler to read and write, and does all the things I'm most likely going to do anyway, like create a local variable.
Often I don't return anything though. ESLint doesn't like me very much for that: according to the rule, you always need a return, otherwise it's "probably a mistake".
https://eslint.org/docs/rules/array-callback-return
Why? Is just good practice? What are downsides of a return-less array method?
Been thinking on this for a while. Any insight or thoughts would be great.
Should I use array methods like map and filter, if I'm not going to return anything?
No, you should not.
Why? Is just good practice?
Yes. It is a good practice to use the appropriate iteration method for the type of iteration you are doing. There are numerous ways to iterate for a reason. Pick the appropriate mechanism.
What are downsides of a return-less array method?
Using .map() and .filter() without actually returning anything from the callback has the following downsides:
Your code is misleading. The point of .map() and .filter() is to iterate over the array and produce a new array. When a developer reads some code and sees .map() or .filter() being used, they expect that there should be a returned array. When they don't see it being done that way, they will be confused, will initially feel like they don't understand the code. If I were doing a code review on code like this, I would not approve of code like this.
Your code unnecessarily creates objects that are not used. That's just wasteful and is not a good practice. Instead, use an iteration method that does not produce an output array such as for/of, a regular for loop or .forEach().
Your code won't lint. Linters provide objections to things for a reason. Using .map() or .filter() without returning anything from the callback is, just as the linter says, "probably a programming mistake" because that is not how those functions are designed to be used and there are appropriate alternatives when you don't want a returned array.
So, if you're just trying to do an iteration without creating any resulting array, use for/of or .forEach() or some other iteration scheme that isn't specifically designed to create an output array that you don't want.
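A minimal sketch of matching the method to the intent (the sample data is made up):

```javascript
const users = [{ name: 'Ann' }, { name: 'Bo' }];

// Side effects only: use forEach (or for/of). No return value is
// expected, and the array-callback-return lint rule stays quiet.
users.forEach(u => console.log(u.name));

// Producing a new array: use map and return from the callback.
const names = users.map(u => u.name);
console.log(names); // ['Ann', 'Bo']
```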
First you need to know the difference between map/filter and forEach.
In short: forEach is mostly used when you want to iterate an array as a procedure, for its side effects.
map and filter apply a callback function on every iteration, and it is the return value of that callback, not the callback itself, that the method evaluates at each step. That is why the return is needed, although JS allows "whatever". That callback is what comes to our understanding as "the filter".
For filter, returning "true" or "false" decides whether the "data" is kept or filtered out.
Basically you can loop with map or forEach/for; the differences are the following:
forEach: this iterates over a list and applies some operation with side effects to each list member. This means you may be transforming THE CURRENT ARRAY you are looping over, or, as noticed by #TiagoCoelho, you don't mess with the array at all and just loop through it.
map: this iterates over a list, transforms each member of that list, and returns another list of the same size with the transformed members. This means you get a BRAND NEW ARRAY with the modified items, and you also still have your old array in memory.
So basically it depends on what you want to do with the data inside your array.
References
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map
examples:
var a = [1, 2, 3, 4];
var b = [1, 2, 3, 4];

// multiply each item by 2 in place; note that reassigning the callback
// parameter alone would do nothing, so we write back through the third
// callback argument (the array itself)
a.forEach((i, idx, arr) => {
    arr[idx] = i * 2;
});

// multiply the items of b by 2, returning a new array
var c = b.map(i => {
    return i * 2;
});

console.log(a); // [2, 4, 6, 8] - modified in place
console.log(b); // [1, 2, 3, 4] - stays the same
console.log(c); // [2, 4, 6, 8] - new array

API/Application Design: Replace a very large array with a new array, or modify the existing array?

I have a very large array of objects (nested objects).
var array = [
{ a: "a", b: { .. }, c:"c", ... },
{...}
]
After some API call, I get a brand new array with 1 modified element and I know exactly which element is modified.
Is it a good idea (in terms of memory usage and performance) to reassign the array to the new value, or to replace only the modified object?
Do I need to modify the API to only send the modified object and update the array accordingly?
The API is developed by another team and it transfers a huge amount of data on each request. I need a solid technical answer to convince them to change the API to send only the required data and do modification at the client side.
If by
"I know exactly which element is modified"
you mean you know the exact position of the modified element, then replacing it is an O(1) operation:
array[positionOfModified] = modified;
Otherwise, you will have to find the element, which is usually an O(N) operation unless you do something like a binary search if the array is sorted (O(logN)).
Therefore, in terms of speed, it could potentially be slower to replace the modified object than to just replace the array reference:
array = newArray;
However, the space (memory) improvement would likely be much larger than the possible speed regression.
Returning only the modified element will reduce your bandwidth since you'll be sending a single object instead of a large array. If this request happens frequently (many users requesting many times, possibly simultaneously), by returning the whole array every time, you are risking congesting your network.
The application memory usage will also be improved because you'll be overwriting a single object instead of an array, thus the garbage collector will only have to worry about cleaning up the modified object, not the entire previous array. Replacing references of large arrays, especially if this replacement is done often (maybe faster than the GC does its cleanup cycles), could blow up your memory fairly quickly.
Ideally, what you could do is send the modified object and its position in the array back, something like:
{
    element: { ... },
    position: ...
}
this will allow you to use small memory/bandwidth while keeping the update process a constant operation.
array[response.position] = response.element;

What's wrong with using join and match to implement an inArray in javascript?

I just came up with an inArray implementation for JavaScript and it's working fine.
It's weird but short, and I've got this feeling that there's something wrong with it, but I'm not sure what it is:
Array.prototype.inArray = function (itm) {
    return this.join("|").match(new RegExp('\\b' + itm + '\\b', 'ig'));
};
UPDATE: this is supposed to be a general implementation of an inArray feature. I'm not sure which is more expensive, doing a loop or creating a regex.
I don't know what your implementation requirements are, but keep these in mind:
you'll be matching a stringified version of the Array members, not the actual members
the \\b will allow any word break, including punctuation, giving false positives.
it would only be useful for single-word entries in the Array, otherwise you could get false positives
you'll be returning null or an Array of "matches". Not sure if that's what you'd want.
I'm sure these just scratch the surface.
If you are implementing a very narrow functionality it could work, but it would not be adequate for general use.
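A quick sketch of those false positives in practice, using the question's implementation as-is:

```javascript
Array.prototype.inArray = function (itm) {
  return this.join("|").match(new RegExp('\\b' + itm + '\\b', 'ig'));
};

// \b treats punctuation (and the "|" join separator) as a word break,
// so substrings of members match:
console.log(['foo-bar'].inArray('bar')); // ['bar'] - false positive
console.log([1, 2, 3].inArray('1'));     // ['1'] - matches the stringified member
console.log(['a', 'b'].inArray('c'));    // null (not false)
```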
Use underscore.js's intersection() method to find out if your array contains an element or even a series of elements:
if (_.intersection(yourArray, [item]).length) {
    // do stuff
}
You can check for multiple items by pushing them into the array too. It also covers any type of object.

Javascript/NodeJS: Best way to find a specific object by id inside an array/collection of objects

Overview
So I have pulled a document out of my database. Inside is a nested collection of objects. Each of the objects inside this nested collection has an '_id' attribute. I want to find one of these objects specifically by its '_id' using JavaScript.
Example
http://jsfiddle.net/WilsonPage/tNngT/
Alternative Example
http://jsfiddle.net/WilsonPage/tNngT/3/
Questions
Is my example the best way of achieving this?
Will this block in Node.js?
Yes, if you only know a specific value which is contained by one of your objects (which are held in an Array) you need to loop over the whole structure and compare those values.
You also did the right thing by breaking the iteration when you find one (the return in your example).
So my answer to your first question would be yes, in terms of performance this is the right and best way.
What I don't get is the "Async" example. You just moved the code and changed the structure. Your code is still "blocking" since you're using a normal for-loop to search. If that array were huge, it would block your node app for the amount of time the loop needs to finish.
To really make it asynchronous, you would need to get rid of any blocking loop. You would instead loop over the structure with a runway-timer.
var findById = function (collection, _id, cb) {
    var coll = collection.slice(0); // create a clone
    (function _loop(data) {
        if (data && data._id === _id) { // guard against an empty collection
            cb.apply(null, [data]);
        } else if (coll.length) {
            setTimeout(_loop.bind(null, coll.shift()), 25);
        }
    }(coll.shift()));
};
And then use it like
findById(myCollection, 102, function (data) {
    console.log('MATCH -> ', data);
});
With this technique (which is a simplified example), we create a self-invoking anonymous function and pass in the first array item (using .shift()). We do our comparison and, if we found the item we are looking for, execute a callback function the caller needs to provide. If we don't have a match but the array still contains elements (checked via .length), we create a timeout of 25ms using setTimeout and call our _loop function again, this time with the next array item, because .shift() gets and removes the first entry. We repeat that until either no items are left or we have found the element. Since setTimeout gives other tasks in the JS main thread (on a browser, the UI thread) the chance to do things, we don't block and screw up the whole show.
As I said, this can be optimized. For instance, we can use a do-while loop within the _loop() method and use Date.now() to keep working until we go over, say, the 50ms mark. If we need longer than that, create a timeout the same way and repeat the operation described above (so it's like: do as much work as possible within each 50ms slice).
I'd pre-sort the array by each item's _id and at least implement a binary search, if not something a little more sophisticated if speed is really an issue.
You could try using binary search, in most cases it's faster than linear search. As jAndy said, you will still block with standard for loop, so take a look at some node asynchronous library. First that falls to my mind is async.js
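As a sketch of the binary-search idea (assuming the collection is pre-sorted by a comparable _id; the names here are illustrative, not from the fiddles):

```javascript
// Returns the index of the object whose _id matches, or -1.
// Requires `sorted` to be sorted ascending by _id.
function findIndexById(sorted, id) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;          // midpoint, halving the range each step
    if (sorted[mid]._id === id) return mid;
    if (sorted[mid]._id < id) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

const sorted = [{ _id: 100 }, { _id: 102 }, { _id: 107 }];
console.log(findIndexById(sorted, 102)); // 1
console.log(findIndexById(sorted, 105)); // -1
```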
I messed around with async.js to produce this solution to my problem. I have tried to make it as reusable as possible, so it is not locked down to searching the '_id' attribute.
My Solution:
http://jsfiddle.net/WilsonPage/yJSjP/3/
Assuming you can generate unique strings from your _id, you could hash them out with JS's native object (the example below is CoffeeScript):
findById = (collection, _id, callback, timeout = 500, step = 10000) ->
  gen = ->
    hash = {}
    for value, i in collection
      hash[value._id] = value
      unless i % step then yield i
    hash[_id]
  it = gen()
  do findIt = ->
    {done, value} = it.next()
    if done then callback value
    else
      console.log "hashed #{(value/collection.length*100).toFixed 0}%"
      setTimeout findIt, timeout
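For readers who don't use CoffeeScript, the core idea in plain JavaScript is a sketch like this (buildIndex is an illustrative name): build the id-to-object hash once, then every lookup is O(1):

```javascript
// Build a lookup table keyed by _id so repeated searches avoid re-scanning.
function buildIndex(collection) {
  const index = {};
  for (const item of collection) {
    index[item._id] = item;
  }
  return index;
}

const myCollection = [
  { _id: 101, name: 'first' },
  { _id: 102, name: 'second' }
];
const byId = buildIndex(myCollection);
console.log(byId[102]); // { _id: 102, name: 'second' }
```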
