I like functional programming; it keeps my code, and especially my scopes, cleaner. I found myself doing some fairly heavy array manipulation, like this:
this.myArray = someArray
    .slice(0, n)
    .map(someFunction);
    // more manipulation
if (condition) {
    this.myArray = this.myArray.reverse();
}
this.myArray = this.myArray
    .reduce(anotherFunction, []);
    // even more manipulation
Is there some built-in way to join the if to my functional chain? Something like:
this.myArray = someArray
    .slice(0, n)
    .map(someFunction)
    // ... more manipulation
    [condition ? 'reverse' : 'void']()
    .reduce(anotherFunction, [])
    // ... even more manipulation
The void() method doesn't exist. Is there an alternative? Is it a popular approach to merge multiple calls into a single chain, even if that means calling methods that do nothing?
I know I can add my own method to Array:
Array.prototype.void = function () {
    return this;
};
But that's not the point. Is there any standard/built-in way to achieve the same effect?
As a neutral function, you could use
Array#concat, which returns a new array with the old items, or
Array#slice, which likewise returns a new array.
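For example, concat() with no arguments returns a shallow copy, so it can stand in for the missing void() directly in the chain (a sketch; condition is a placeholder):

```javascript
const condition = false; // placeholder for the real condition
const result = [3, 1, 2]
    .slice(0, 2)                         // [3, 1]
    [condition ? 'reverse' : 'concat']() // concat() with no arguments just copies
    .map(x => x * 10);
console.log(result); // [30, 10]
```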
Is it a popular approach to merge multiple calls into a single chain, even if that means calling methods that do nothing?
No. The usual approach is to split the chain when there is an optional step, similar to how you wrote it in your first snippet. You wouldn't, however, repeatedly assign to this.myArray; you would use constant temporary variables:
const array1 = someArray.slice(0, n).map(someFunction) // more manipulation;
const array2 = condition ? array1.reverse() : array1;
this.myArray = array2.reduce(anotherFunction, []) // even more manipulation
That said, in functional programming that uses functions, not methods, you sometimes do find the approach of having a configurable chain. They don't need a void method on the object, they just use the identity function.
Example in Haskell:
let maybeReverse = if condition then reverse else id
let myArray = foldl anotherFunction [] $ maybeReverse $ map someFunction $ take n someArray
Example in JavaScript (where you don't have as many useful builtins and need to write them yourself):
const fold = (fn, acc, arr) => arr.reduce(fn, acc);
const reverse = arr => arr.reverse(); // add .slice() to make pure
const identity = x => x;
const map = (fn, arr) => arr.map(fn);
const take = (n, arr) => arr.slice(0, n);
const maybeReverse = condition ? reverse : identity;
const myArray = fold(anotherFunction, [], maybeReverse(map(someFunction, take(n, someArray))));
Btw, in your particular example I wouldn't use reverse at all, but rather conditionally switch between reduce and reduceRight :-)
Another option is to switch reduce() for reduceRight(), which wouldn't add an extra step at all for the case shown:
this.myArray = someArray
    .slice(0, n)
    .map(someFunction)
    [condition ? 'reduce' : 'reduceRight'](anotherFunction, [])
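As a sketch of why that substitution works: for a reducer that consumes elements one at a time (the append reducer below is illustrative, not from the question), reduceRight walks the same elements back to front, matching reverse-then-reduce:

```javascript
const append = (acc, x) => acc.concat(x);
const arr = [1, 2, 3];

// reverse-then-reduce (slice() first, since reverse() mutates)
const viaReverse = arr.slice().reverse().reduce(append, []);
// reduceRight walks back to front in a single pass
const viaReduceRight = arr.reduceRight(append, []);
console.log(viaReverse, viaReduceRight); // [3, 2, 1] [3, 2, 1]
```

One caveat: the reducer's optional index argument counts from the end under reduceRight, so reducers that rely on the index need extra care.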
Related
Are there any substantial reasons why modifying Array.push() to return the object pushed rather than the length of the new array might be a bad idea?
I don't know if this has already been proposed or asked before; Google searches returned only a myriad of questions related to the current functionality of Array.push().
Here's an example implementation of this functionality, feel free to correct it:
;(function() {
    var _push = Array.prototype.push;
    Array.prototype.push = function() {
        return this[_push.apply(this, arguments) - 1];
    };
}());
You would then be able to do something like this:
var someArray = [],
value = "hello world";
function someFunction(value, obj) {
obj["someKey"] = value;
}
someFunction(value, someArray.push({}));
Where someFunction modifies the object passed in as the second parameter, for example. Now the contents of someArray are [{"someKey": "hello world"}].
Are there any drawbacks to this approach?
See my detailed answer here
TLDR;
You can get the mutated array as a return value if you add the element with Array#concat instead.
concat is a way of "adding" or "joining" two arrays together. The awesome thing about this method is that it returns the resultant array, so it can be chained.
newArray = oldArray.concat([newItem]);
This also allows you to chain functions together:
updatedArray = oldArray
    .filter((item) => item.id !== updatedItem.id)
    .concat([updatedItem]);
Where item = {id: someID, value: someUpdatedValue}.
The main thing to notice is that you need to pass an array to concat. So make sure that you put the value to be "pushed" inside square brackets, and you're good to go. This will give you the functionality you expected from push().
You can join two arrays with spread syntax, or by calling concat() on one of them:
let arrayAB = [...arrayA, ...arrayB];
let arrayCD = arrayC.concat(arrayD);
Note that by using the concat method, you can take advantage of "chaining" commands before and after concat.
Are there any substantial reasons why modifying Array.push() to return the object pushed rather than the length of the new array might be a bad idea?
Of course there is one: other code will expect Array::push to behave as defined in the specification, i.e. to return the new length. And other developers will find your code incomprehensible if you redefine built-in functions to behave unexpectedly.
At least choose a different name for the method.
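For instance, a standalone helper with its own name sidesteps the problem entirely (a sketch; the name pushed is made up, not a builtin):

```javascript
// Appends the element and returns it, leaving Array.prototype.push alone.
const pushed = (arr, el) => (arr.push(el), el);

const someArray = [];
const obj = pushed(someArray, {});
obj.someKey = "hello world";
console.log(someArray); // [ { someKey: 'hello world' } ]
```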
You would then be able to do something like this: someFunction(value, someArray.push({}));
Uh, what? Yeah, my second point already strikes :-)
However, even if you didn't use push, this does not get across what you want to do. The composition that you should express is "add an object which consists of a key and a value to an array". With a more functional style, let someFunction return this object, and you can write:
var someArray = [],
value = "hello world";
function someFunction(value, obj) {
obj["someKey"] = value;
return obj;
}
someArray.push(someFunction(value, {}));
Just as a historical note -- There was an older version of JavaScript -- JavaScript version 1.2 -- that handled a number of array functions quite differently.
In particular to this question, Array.push did return the item, not the length of the array.
That said, 1.2 has not been used for decades now -- but some very old references might still refer to this behavior.
http://web.archive.org/web/20010408055419/developer.netscape.com/docs/manuals/communicator/jsguide/js1_2.htm
Since ES6, the recommended approach is to extend the Array class properly and then override the push method:
class XArray extends Array {
    push() {
        super.push(...arguments);
        return (arguments.length === 1) ? arguments[0] : arguments;
    }
}

//---- Application
let list = [1, 3, 7, 5];
list = new XArray(...list);
console.log('Push one item : ', list.push(4));
console.log('Push multi-items :', list.push(-9, 2));
console.log('Check length :', list.length);
The push() method returns the new length of the array rather than the element added, which makes it very inconvenient when writing short functions/reducers. push() also mutates the array, which is a rather imperative style in JS. On the other hand, spread syntax [...] does what you need here: it returns a new array.
// to concat arrays
const a = [1, 2, 3];
const b = [...a, 4, 5];
console.log(b); // [1, 2, 3, 4, 5]

// to concat and get a length
const arrA = [1, 2, 3, 4, 5];
const arrB = [6, 7, 8];
console.log([0, ...arrA, ...arrB, 9].length); // 10

// to reduce
const arr = ["red", "green", "blue"];
const liArr = arr.reduce((acc, cur) => [...acc, `<li style='color:${cur}'>${cur}</li>`], []);
console.log(liArr);
// [ "<li style='color:red'>red</li>",
//   "<li style='color:green'>green</li>",
//   "<li style='color:blue'>blue</li>" ]
var arr = [];
var element = Math.random();
console.assert(element === arr[arr.push(element) - 1]);
How about doing someArray[someArray.length]={} instead of someArray.push({})? The value of an assignment is the value being assigned.
var someArray = [],
value = "hello world";
function someFunction(value, obj) {
obj["someKey"] = value;
}
someFunction(value, someArray[someArray.length]={});
console.log(someArray)
I'm doing array manipulation in Javascript, and I want to be able to chain operations with multiple calls to map, concat, etc.
const someAmazingArrayOperation = (list) =>
list
.map(transformStuff)
.sort(myAwesomeSortAlgorithm)
.concat([someSuffixElement])
.precat([newFirstElement])
.filter(unique)
But the problem I've run into is that Array.precat doesn't exist. (Think of Array.concat, but the reverse.)
I don't want to modify Array.prototype in my own code, for reasons. (https://flaviocopes.com/javascript-why-not-modify-object-prototype/)
I could totally use Array.concat and concatenate my array to the end of the prefix array and carry on. But that doesn't chain with the other stuff, and it makes my code look clunky.
It's kind of a minor issue because I can easily write code to get the output I want. But it's kind of a big deal because I want my code to look clean and this seems like a missing piece of the Array prototype.
Is there a way to get what I want without modifying the prototype of a built-in type?
For more about the hypothetical Array.precat, see also:
concat, but prepend instead of append
You could use Array#reduce with a function which appends each item to an accumulator, passing the array to prepend as the initialValue:
const
    precat = (a, b) => [...a, b],
    result = [1, 2, 3].reduce(precat, [9, 8, 7]);

console.log(result); // [9, 8, 7, 1, 2, 3]
If you don't want to modify Array.prototype, you can consider extending Array instead:
class AmazingArray extends Array {
precat(...args) {
return new AmazingArray().concat(...args, this);
}
}
const transformStuff = x => 2*x;
const myAwesomeSortAlgorithm = (a, b) => a - b;
const someSuffixElement = 19;
const newFirstElement = -1;
const unique = (x, i, arr) => arr.indexOf(x) === i;
const someAmazingArrayOperation = (list) =>
new AmazingArray()
.concat(list)
.map(transformStuff)
.sort(myAwesomeSortAlgorithm)
.concat([someSuffixElement])
.precat([newFirstElement])
.filter(unique);
console.log(someAmazingArrayOperation([9, 2, 2, 3]));
I don't want to modify Array.prototype in my own code, for reasons.
These reasons are good, but you can sidestep them by using a collision-safe property - key it with a symbol, not a name:
const precat = Symbol('precatenate')
Array.prototype[precat] = function(...args) {
return [].concat(...args, this);
};
const someAmazingArrayOperation = (list) =>
list
.map(transformStuff)
.sort(myAwesomeCompareFunction)
.concat([someSuffixElement])
[precat]([newFirstElement])
.filter(unique);
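A third option, if prototypes are off the table entirely, is a plain helper built on spread syntax; it interrupts the fluent chain at one point but keeps everything vanilla (a sketch, with precat as a free function):

```javascript
const precat = (prefix, arr) => [...prefix, ...arr];

const result = precat(
    [-1],                                        // newFirstElement
    [9, 2, 3].sort((a, b) => a - b).concat([19]) // the rest of the chain
);
console.log(result); // [-1, 2, 3, 9, 19]
```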
This is mostly for academic interest, as I've managed to solve it an entirely different way, but, short story, what I want is, in pseudocode:
Foreach object in array1
Find matching otherObject in array2 // must exist and there's only 1
Find matching record in array3 // must exist and there's only 1
If record.status !== otherObject.status
push { otherObject.id, record.status } onto newArray
It intuitively seems to me there should be a way to do something with array1.filter(<some function>).map(<some other function>), but I can't get it to work in practice.
Here's a real-world example. This works:
function update(records) {
const filtered = records.filter((mcr) => {
const match = at._people.find((atr) => atr.email.toLowerCase() ===
mcr.email.toLowerCase());
return (match.subscriberStatus.toLowerCase() !==
mc.mailingList.find((listEntry) =>
listEntry.id === mcr.id).status.toLowerCase()
);
});
const toUpdate = filtered.map((mcr) => {
const match = at._people.find((atr) => atr.email.toLowerCase() ===
mcr.email.toLowerCase());
return ({ 'id': match.id,
'fields': {'Mailing List Status': mcr.subscriberStatus }
}
);
  });
  return toUpdate;
}
But what bums me out is the duplicated const match =. It seems to me that those could be expensive if at._people is a large array.
I naively tried:
function update(records) {
let match;
const toUpdate = records.filter((mcr) => {
match = at._people.find((atr) => atr.email.toLowerCase() ===
mcr.email.toLowerCase());
// return isDifferent?
return (match.subscriberStatus.toLowerCase() !==
mc.mailingList.find((listEntry) => listEntry.id === mcr.id).status.toLowerCase());
}).map((foundMcr) => {
return ({ 'id': match.id, 'fields': {'Mailing List Status': foundMcr.subscriberStatus } })
});
}
But (somewhat obviously, in retrospect) this doesn't work, because inside the map, match never changes — it's just always the last thing it was in the filter. Any thoughts on how to pass that match.id found in the filter on an entry-by-entry basis to the chained map? Or, really, any other way to accomplish this same thing?
If you were to only use .map() and .filter(), you can avoid extra re-calculations later in the chain by doing the following (generic steps):
.map() each item into a wrapper object that contains:
the original item from the array
any extra data you will need in later steps
.filter() the wrapper objects based on the calculated data.
.map() the leftover results into the shape you wish, drawing on the original item and any of the calculated data.
In your case, this can mean that:
You do the find logic once.
Use the found items to discard some of the results.
Use the rest to generate a new array.
Here is the result with the callbacks extracted to make the map/filter/map logic clearer:
//takes a record and enriches it with `match` and `mailingStatus`
const wrapWithLookups = mcr => {
const match = at._people.find((atr) => atr.email.toLowerCase() ===
mcr.email.toLowerCase());
const mailingListStatus = mc.mailingList.find((listEntry) => listEntry.id === mcr.id).status;
return { match, mailingListStatus, mcr };
};
//keeps records whose subscriber status differs from the mailing list status
const isCorrectSubscriberStatus = ({match, mailingListStatus}) =>
  match.subscriberStatus.toLowerCase() !== mailingListStatus.toLowerCase();
//converts to a new item based on mcr and match
const toUpdatedRecord = ({match, mcr}) => ({
'id': match.id,
'fields': {'Mailing List Status': mcr.subscriberStatus }
});
function update(records) {
return records
.map(wrapWithLookups)
.filter(isCorrectSubscriberStatus)
.map(toUpdatedRecord);
}
This saves the re-calculation of match and/or mailingListStatus if they are needed later. However, it does introduce an entirely new loop through the array just to collect them. This could be a performance concern; however, it is very easily remedied if you use a lazily evaluated chain like the one provided by Lodash. The code adjustment to use that would be:
function update(records) {
    return _(records)                      // wrap in a lazy chain evaluator by Lodash ->-+
        .map(wrapWithLookups)              // same as before                              |
        .filter(isCorrectSubscriberStatus) // same as before                              |
        .map(toUpdatedRecord)              // same as before                              |
        .value();                          // extract the value <-------------------------+
}
Other libraries would likely have a very similar approach. In any case, lazy evaluation does not run once through the array for .map(), then another time for .filter(), then a third time for the second .map() but instead only iterates once and runs the operations as appropriate.
Lazy evaluation can be expressed through a transducer which is built on top of reduce(). For an example of how transducers work see:
How to chain map and filter functions in the correct order
Transducer flatten and uniq
Thus it is possible to avoid all the .map() and .filter() calls by writing one combined function and using .reduce() directly. However, I personally find that harder to reason about and more difficult to maintain than expressing the logic through a .map().filter().map() chain and then using lazy evaluation if performance is needed.
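To make that concrete, here is a minimal sketch of such a combined .reduce() (mapA, keep and mapB stand in for wrapWithLookups, isCorrectSubscriberStatus and toUpdatedRecord):

```javascript
// Single-pass fusion of map -> filter -> map via reduce.
const mapFilterMap = (mapA, keep, mapB) => (arr) =>
    arr.reduce((acc, item) => {
        const wrapped = mapA(item);
        if (keep(wrapped)) acc.push(mapB(wrapped));
        return acc;
    }, []);

// Toy usage: double each number, keep those above 4, then label them.
const run = mapFilterMap(x => x * 2, x => x > 4, x => `v${x}`);
console.log(run([1, 2, 3, 4])); // [ 'v6', 'v8' ]
```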
Worth noting that the map() -> filter() -> map() logic does not need to be used via a lazy chain. You can use a library like the FP distribution of Lodash or vanilla Ramda, which give you generic map() and filter() functions that can be applied to any list and composed with each other to avoid the repeated traversals as well. Using Lodash FP this would be:
import map from 'lodash/fp/map';
import filter from 'lodash/fp/filter';
import flow from 'lodash/fp/flow';
function update(records) {
const process = flow(
map(wrapWithLookups),
filter(isCorrectSubscriberStatus),
map(toUpdatedRecord),
);
return process(records);
}
For Ramda the implementation would be the same - map() and filter() act the same across the two libraries; the only difference is that the composition function (flow() in Lodash) is called pipe() in Ramda. It acts identically to flow():
pipe(
map(wrapWithLookups),
filter(isCorrectSubscriberStatus),
map(toUpdatedRecord),
)
For a deeper look at why you might want to avoid chaining and the alternative here, see the article on Medium.com: Why using _.chain is a mistake.
Here's how I would do this, if this can help (using find() so that matching1/matching2 are single objects rather than arrays, and !== so that only entries whose statuses differ are kept, per the pseudocode):
let newArray = array1.filter((item) => {
    let matching1 = array2.find(matchingFunction);
    let matching2 = array3.find(matchingFunction);
    return matching1?.status !== matching2?.status;
})
const validateCred = arr => {
    let checkableArr = arr.pop();
    for (let i = arr.length - 1; i >= 0; i--) {
        checkableArr.push(arr[i]);
    }
}
When I run the code, I get an error saying that .push() is not a function I can use on checkableArr. This is because checkableArr isn't necessarily an array: it is whatever the last element of arr (the argument passed when the function is called) happens to be, and the function can't be sure that element is an array. Is there any way to check that the argument passed into the function is an array?
EDIT:
The thing I was looking for is called Array.isArray(), a method that returns a boolean indicating whether the value passed to it is an array. Thanks to @David for showing me this tool, along with a bunch of helpful information that helped a lot with writing my program.
You're getting that error because you haven't made sure that the last item of the passed array (arr) is an array itself, but your function's logic requires it to be one.
There are various ways to solve this; some of them have already been outlined by others (@hungerstar).
Check the last element of arr
One attempt is to ensure that the last element/item inside arr is an array and bail out if it isn't.
const validateCred = arr => {
let lastItem = arr.pop();
if (!Array.isArray(lastItem)) {
throw new Error('validateCred :: Last item of passed array must be an array itself');
}
// ... rest of your code ...
}
Although that does not solve the root cause, it ensures you get a decent and descriptive message about what went wrong. It's possible to improve that by defining a fallback array in case the last item isn't an array itself. Something like this:
const validateCred = arr => {
let lastItem = arr.pop();
let checkableArr = Array.isArray(lastItem) ? lastItem : [];
// ... rest of your code ...
}
One thing to note: if the last item is a plain value rather than an array, you have to copy that value into the new array!
const validateCred = arr => {
let lastItem = arr.pop();
let checkableArr = Array.isArray(lastItem) ? lastItem : [lastItem]; // <--
// ... rest of your code ...
}
HINT: The following answer is based on guessing. The name validateCred leads me to assume you use it to validate credentials. However, that's just guessing, because all the provided code does is take the last item and then push the rest of the contents of arr into it in reverse (= reversing and flattening).
Reversing and flattening
If all you want to do with validateCred is reversing and flattening (and you only need to target environments that support .flat()), you can easily do that with a one-liner:
// impure version (mutates the input array)
const validateCred = arr => arr.reverse().flat();

// pure version (leaves the input untouched)
const validateCred = arr => arr.flat().reverse();
Note that the two produce the same result only while any nested array holds at most one element; once a nested array holds more, the order of reverse and flat matters.
To support older environments as well, you can use .reduce and .concat instead of .flat:
// impure version
const validateCred = arr => arr.reverse().reduce((acc, x) => acc.concat(x), []);
// pure version
const validateCred = arr => arr.reduce((acc, x) => acc.concat(x), []).reverse();
Note that arr.pop() removes and returns the last element of the array, which may be a number or a string depending on what the array holds, so checkableArr is not necessarily an array itself; that is why you are getting the error. If you simply want to work with the whole array, don't pop at all:
let checkableArr = arr; // instead of arr.pop();