How to duplicate value in functional js?

I have a couple of functions, the first of which is an "expensive" getter:
function getter() {
  return {
    a: "foo",
    b: "bar",
    c: "should be intentionally skipped"
  }
}
The second is a transformer, which has a requirement to stay in strictly functional form:
const transformer = x => [getter().a+x, getter().b+x]
The issue is that there are two expensive getter calls here.
How can I call getter only once, keeping it in fp-form syntax (I particularly mean - without using var, const, let and return inside transformer)?
In other words, what is js fp equivalent of transformer function:
const transformer = (x) => {
  const cached = getter()
  return [cached.a+x, cached.b+x]
}
console.log(transformer("_test"))
output:
[ 'foo_test', 'bar_test' ]

keeping it in fp-form syntax - I particularly mean, without using var, const, let and return inside transformer
That is not what functional programming means, not even purely functional programming. You should avoid side effects and keep functions pure so that you gain referential transparency, which helps in understanding the code. It does not mean that you should avoid introducing names in your program. Using const is totally fine! You even use it to declare const transformer.
If you absolutely want to avoid such statements and basically emulate let expressions, you can do
const transformer = x =>
  (cached =>
    [cached.a+x, cached.b+x]
  )( getter() );
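A quick check, using the getter above:
transformer("_test") //=> [ 'foo_test', 'bar_test' ]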
And of course, if getter is a pure function, there's no reason to run it every time transformer is called. So just hoist it outside the function body:
const cached = getter();
const transformer = x => [cached.a+x, cached.b+x];
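If the concern with hoisting is that the expensive getter then runs at load time even when transformer is never called, a lazy once-style cache is a common middle ground. This is a sketch of mine, not part of the original answer; the statements live in the helper, outside transformer, so the question's constraint still holds:
// sketch: call fn at most once, cache and reuse the result
const once = fn => {
  let called = false, result;
  return () => {
    if (!called) { called = true; result = fn(); }
    return result;
  };
};
const getterOnce = once(getter);
const transformer = x => [getterOnce().a + x, getterOnce().b + x];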

Edit: the question has been amended to operate on a subset of keys in the computationally expensive object.
This amended answer uses Object.entries() to gather keys and values. Before transforming values, the entries are filtered to include only the desired keys...
function getter() {
  return {
    a: "foo",
    b: "bar",
    c: "should be intentionally skipped"
  }
}
const transformer = x => {
  return Object.entries(getter())
    .filter(([k, v]) => ['a', 'b'].includes(k))
    .map(([k, v]) => v + x);
}
console.log(transformer(" plus x"));
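For reference, this logs [ 'foo plus x', 'bar plus x' ]: getter runs once, c is filtered out, and both remaining values are transformed.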

Related

Performing function on object and then manipulating the object in Ramda

I'm struggling with a little bit of ramda logic, which I feel like I've almost got a grasp on, but my brain is just not working properly today.
I have an object:
const thing = {
  'name': 'thing',
  'value': 1000.0987654321,
  'valueAsString': "1000.0987654321",
  'otherThings': { 'blah': 'blah' },
}
I want to extract 'name' and 'value' from thing, but I want to round the value before returning my new object.
I know that to extract name and value I can just use pick: R.pick(['name', 'value']), and to perform the rounding, I can take an existing rounding function:
const roundTo9Dp = (n) => Number((n).toFixed(9))
and apply this to my object like this: R.compose(roundTo9Dp, R.prop('value'))
These two operations work independently:
const picker = R.pick(['name', 'value'])
picker(thing) // => {"name": "thing", "value": 1000.0987654321}
const rounded = R.compose(roundTo9Dp, R.prop('value'))
rounded(thing) // => 1000.098765432
It's when I join them together that I'm struggling. It's like they're operating on 'thing' at different levels, and I just can't unpick them.
R.compose(picker, R.assoc('value', rounded))(thing) // Incorrect
picker(R.compose(R.assoc('value'), rounded)(thing)(thing)) // works, but is hideous
There are quite a few ways you could do this with Ramda. Here are a few:
const roundTo9Dp = (n) => Number((n).toFixed(9))

const foo1 = applySpec({
  name: prop('name'),
  value: compose(roundTo9Dp, prop('value'))
})

const foo2 = pipe(
  pick (['name', 'value']),
  over (lensProp ('value'), roundTo9Dp)
)

const rounded = R.compose(roundTo9Dp, R.prop('value'))
const foo3 = pipe(
  pick (['name', 'value']),
  chain(assoc('value'), rounded)
)

const foo4 = pipe(
  props (['name', 'value']),
  zipWith (call, [identity, roundTo9Dp]),
  zipObj (['name', 'value'])
)

const thing = {name: 'thing', value: 1000.0987654321, valueAsString: "1000.0987654321", otherThings: {blah: 'blah'}}

console .log ('foo1:', foo1 (thing))
console .log ('foo2:', foo2 (thing))
console .log ('foo3:', foo3 (thing))
console .log ('foo4:', foo4 (thing))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
<script> const {applySpec, prop, compose, pipe, pick, over, lensProp, chain, assoc, props, zipWith, call, identity, zipObj} = R </script>
And we could come up with many more if we tried. foo3 is probably closest to what you were struggling with. chain when applied to functions works like chain (f, g) (x) //=> f (g (x)) (x), which would avoid the ugly (thing) (thing) in your version. This version might teach you something about the world of FantasyLand typeclasses. foo1 uses one of Ramda's more convenient object manipulation functions, applySpec. foo2 uses lensProp and over, which can lead you into the fascinating world of lenses. And foo4, while probably not recommended, shows off zipWith and zipObj, functions used to combine lists.
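If it helps to see chain's function behaviour without the library, here is a minimal vanilla sketch of mine (not Ramda's actual implementation):
// chain for functions: f receives g's result AND the original input
const chainFn = (f, g) => x => f(g(x))(x);

const roundTo9Dp = n => Number(n.toFixed(9));
const foo = chainFn(
  v => obj => ({ ...obj, value: v }), // plays the role of assoc('value')
  obj => roundTo9Dp(obj.value)        // plays the role of rounded
);
foo({ name: 'thing', value: 1000.0987654321 })
//=> { name: 'thing', value: 1000.098765432 }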
But unless this is about learning Ramda, I would suggest none of these, as this is simple enough to do without any library in modern JS:
const foo = ({name, value}) =>
  ({name, value: roundTo9Dp(value)})
I'm one of the founders of Ramda, and I remain a big fan. But I see it as a library to be used when it makes code cleaner and more maintainable. Here, the simplest version doesn't need it.

Ramda.js pipe that sets a property based on a previous parameter

Currently, I have the following code (which works):
const double = R.multiply(2);
const piped = R.pipe(
  (obj) => R.assoc('b', double(obj.a))(obj),
  (obj) => R.assoc('c', double(obj.b))(obj)
);
console.log(
  piped({ a: 1 })
);
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
However, because of that (obj) at the end of each pipe function, I suspect I could refactor this into something better in the "Ramda world".
I'm still new to this library, so I yet don't know all the methods and tricks.
Is there a better way to do so using Ramda?
My "real" code is this:
function getScripts() {
  const tryRequire = tryCatch((path) => require(path).run, always(null));
  const addPathAndRunProps = pipe(
    // Note: The `scriptsPath` function is a bound path.join function.
    // It just returns a string.
    (dir) => assoc('path', scriptsPath(dir.name, 'index.js'))(dir),
    (dir) => assoc('run', tryRequire(dir.path))(dir)
  );
  const addModuleRunAndFilterInvalid = pipe(
    map(addPathAndRunProps),
    filter((dir) => typeof dir.run === 'function')
  );
  return addModuleRunAndFilterInvalid(
    fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true })
  );
}
I think you might be over-using Ramda here. The code is a bit confusing. This would likely be easier to read in the future and more maintainable, while still being functional:
function getScripts() {
  const tryRequire = tryCatch((path) => require(path).run, always(null));
  const addPathAndRunProps = dir => {
    const path = scriptsPath(dir.name, 'index.js')
    return {
      ...dir,
      path,
      run: tryRequire(path),
    }
  }
  return pipe(
    map(addPathAndRunProps),
    filter(x => typeof x.run === 'function'),
  )(fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true }))
}
Or, if you really want to keep those setters, try splitting your addPathAndRunProps function into two setters:
function getScripts() {
  const tryRequire = tryCatch((path) => require(path).run, always(null));
  const addPathProp = x => assoc('path', scriptsPath(x.name, 'index.js'), x)
  const addRunProp = x => assoc('run', tryRequire(x.path), x)
  return pipe(
    map(addPathProp),
    map(addRunProp),
    filter(x => typeof x.run === 'function'),
  )(fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true }))
}
In both cases, I got rid of your addModuleRunAndFilterInvalid function. It doesn't add any clarity to your function to have addModuleRunAndFilterInvalid split out into its own function, and returning the result of the pipe clarifies the purpose of the getScripts function itself.
Also, in your code, you keep calling the object you're operating on dir. This is confusing since it implies the object has the same structure on each function call. However the variable passed to addRunProp does not have the same structure as what is passed to addPathProp (the one passed to addRunProp requires an extra path prop). Either come up with a descriptive name, or just use x. You can think of x as the thing your function is operating on. To figure out what x is, look at the function name (e.g. addRunProp means that x is something that will have a run property added to it).
One other potentially useful tip: I've settled on the naming convention of aug (short for "augment") for adding a property or bit of info to an object. So I'd rename your addPathProp function augPath and rename your addRunProp function augRun. Since I use it consistently, I know that when I see aug at the beginning of a function, it's adding a property.
I agree with Cully's answer -- there might not be any good reason to try to use Ramda's functions here.
But, if you're interested, there are some Ramda functions which you might choose to use.
chain and ap are fairly generic functions operating on two different abstract types. But when used with functions, they have some fairly useful behavior as combinators:
chain (f, g) (x) //=> f (g (x)) (x)
ap (f, g) (x) //=> f (x) (g (x))
That means that you could write your function like this:
const piped = R.pipe(
  chain (assoc ('b'), pipe (prop ('a'), double)),
  chain (assoc ('c'), pipe (prop ('b'), double)),
)
I don't think this version improves on the original; the repetition involved in those internal pipe calls is too complex.
However with a helper function, this might be more reasonable:
const doubleProp = curry (pipe (prop, double))
// or doubleProp = (prop) => (obj) => 2 * obj[prop]

const piped = R.pipe(
  chain (assoc ('b'), doubleProp ('a')),
  chain (assoc ('c'), doubleProp ('b')),
);
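As a quick sanity check (with double = R.multiply(2) as in the question):
piped({ a: 1 }) //=> { a: 1, b: 2, c: 4 }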
This is now, to my mind, pretty readable code. Of course it requires an understanding of chain and how it applies to functions, but with that, I think it's actually an improvement on the original.
I frequently make the point that point-free code is a useful tool only when it makes our code more readable. When it doesn't, pointed code is no less functional than point-free code.
By the way, I just want to note that I'm impressed with the quality of your question. It's really nice to read questions that are well-thought out and well-presented. Thank you!

Functional Programming - then() between chained filter/map calls

I am parsing data like this:
getData()
  .filter(fn)
  .filter(fn2)
  .filter(fn3)
  .map(fn4)
in which the filters are conceptually separated and do different operations.
For debugging purposes, is there a JavaScript library or a way to wrap promises such that I can do this:
getData()
  .filter(fn)
  .then((result) => { log(result.count); return result })
  .filter(fn2)
  .then(debugFn) // extra chained debug step (not iterating through arr)
  .filter(fn3)
  .map(fn4)
Or is this an anti-pattern?
EDIT
After some thoughts I'm convinced that the best answer to this question has been given by V-for-Vaggelis: just use breakpoints.
If you do proper function composition, then inserting a few tap calls in your pipeline is cheap, easy and non-intrusive, but it won't give you as much information as a breakpoint (and knowing how to use a debugger to step through your code) would.
Applying a function to x and returning x as is, no matter what, already has a name: tap. In libraries like ramda.js, it is described as follows:
Runs the given function with the supplied object, then returns the object.
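Outside of any library, tap is a one-liner; a minimal sketch:
const tap = fn => x => { fn(x); return x; };

[1, 2, 3].map(tap(x => console.log('saw', x))); // logs each value, result is still [1, 2, 3]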
Since filter, map, ... all return new instances, you probably have no choice other than to extend the prototype.
We can find ways to do it in a controlled manner though. This is what I'd suggest:
const debug = (xs) => {
  Array.prototype.tap = function (fn) {
    fn(this);
    return this;
  };
  Array.prototype.debugEnd = function () {
    delete Array.prototype.tap;
    delete Array.prototype.debugEnd;
    return this;
  };
  return xs;
};
const a = [1, 2, 3];
const b =
  debug(a)
    .tap(x => console.log('Step 1', x))
    .filter(x => x % 2 === 0)
    .tap(x => console.log('Step 2', x))
    .map(x => x * 10)
    .tap(x => console.log('Step 3', x))
    .debugEnd();
console.log(b);
try {
  b.tap(x => console.log('WAT?!'));
} catch (e) {
  console.log('Array prototype is "clean"');
}
If you can afford a library like Ramda, the safest way (IMHO) would be to introduce tap in your pipeline.
const a = [1, 2, 3];
const transform =
  pipe(
    tap(x => console.log('Step 1', x)),
    filter(x => x % 2 === 0),
    tap(x => console.log('Step 2', x)),
    map(x => x * 10),
    tap(x => console.log('Step 3', x))
  );
console.log(transform(a));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
<script>const {pipe, filter, map, tap} = R;</script>
Adding functions to built-in object prototypes is controversial, so many people might advise against it. However, if you really want to be able to do what you're asking, that's probably the only option:
Object.defineProperty(Array.prototype, "examine", {
  value: function(callback) {
    callback.call(this, this);
    return this;
  }
});
Then you can put .examine(debugFn) calls in the chain of .filter() calls, as you described.
You could monkey-patch Array.prototype, but it's not recommended.
As long as you only use it for debugging:
Array.prototype.debug = function (fn) {
  fn(this);
  return this;
};
// example usage
[1, 2, 3].map(n => n * 2).debug(console.log).map(n => n * 3);
It's not a promise - you probably don't need async - but gives you .then-like behaviour.
The main issue here is that you're relying on a chaining pattern that doesn't scale very well.
a.method().method() only lets you apply functions (methods) that are supported by the prototype of the given context (a in this case).
I'd suggest taking a look at function composition (pipe vs. compose) instead. This design pattern doesn't depend on a specific context, so you can provide behaviour externally.
const asyncPipe = R.pipeWith(R.then);

const fetchWarriors = (length) => Promise.resolve(
  Array.from({ length }, (_, n) => n),
);

const battle = asyncPipe([
  fetchWarriors,
  R.filter(n => n % 2 === 0),
  R.filter(n => n / 5 < 30),
  R.map(n => n ** n),
  R.take(4),
  R.tap(list => console.log('survivors are', list)),
]);

/* const survivors = await */ battle(100);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
As you can see from the snippet above, the Array type does not really need to implement everything itself...
I believe one could use breakpoints to debug something like this.
If you don't want to overwrite either prototype, you could write a wrapper function that takes a promise and gives you a modified promise with the additional features you want. The problem is that you will then need to import all methods that may be used, which is bad for tree-shaking.
The pipeline operator proposal tries to address this problem.
Until then, things like lodash's _.flow remain, which allow you to do this:
_.pipe([
  _.filter(fn),
  _.filter(fn2),
])(data);
Now you basically want this in an async way. This should be pretty easy to accomplish with tools like Ramda.
You can do what you want pretty easily with rubico, a functional (programming) promise library
import { pipe, tap, map, filter, transform } from 'rubico'

const pipeline = pipe([
  getData,
  filter(fn),
  tap((result) => { log(result.count) }),
  filter(fn2),
  debugFn,
  filter(fn3),
  map(fn4),
])
You can use the above pipeline as a transducer (without debugFn for now, since I am not sure of the exact nature of what it does) using rubico's transform:
transform(pipeline, [])
This leaves you with an efficient transformation pipeline based on transduction.

Is explicit type passing not equivalent to type inference (in terms of expressiveness)?

I am trying to translate traverse/sequenceA to Javascript. Now the following behavior of the Haskell implementation gives me trouble:
traverse (\x -> x) Nothing -- yields Nothing
sequenceA Nothing -- yields Nothing
traverse (\x -> x) (Just [7]) -- yields [Just 7]
sequenceA (Just [7]) -- yields [Just 7]
As a Haskell newbie I wonder why the first expression works at all:
instance Traversable Maybe where
  traverse _ Nothing = pure Nothing
  traverse f (Just x) = Just <$> f x
pure Nothing shouldn't work in this case, since there is no minimal applicative context in which the value could be put. It seems as if the compiler checks the type of this expression lazily and, since mapping the id function over Nothing is a noop, it simply "overlooks" the type error, so to speak.
Here is my Javascript translation:
(Please note that since Javascript's prototype system is insufficient for a couple of type classes and there is no strict type checking anyway, I work with Church encoding and pass type constraints to functions explicitly.)
// minimal type system realized with `Symbol`s
const $tag = Symbol.for("ftor/tag");
const $Option = Symbol.for("ftor/Option");
const Option = {};

// value constructors (Church encoded)
const Some = x => {
  const Some = r => {
    const Some = f => f(x);
    return Some[$tag] = "Some", Some[$Option] = true, Some;
  };
  return Some[$tag] = "Some", Some[$Option] = true, Some;
};

const None = r => {
  const None = f => r;
  return None[$tag] = "None", None[$Option] = true, None;
};

None[$tag] = "None";
None[$Option] = true;

// Traversable
// of/map are explicit arguments of traverse to imitate type inference
// tx[$Option] is just duck typing to enforce the Option type
// of == pure in Javascript
Option.traverse = (of, map) => ft => tx =>
  tx[$Option] && tx(of(None)) (x => map(Some) (ft(x)));

// (partial) Array instance of Applicative
const map = f => xs => xs.map(f);
const of = x => [x];

// helpers
const I = x => x;

// applying
Option.traverse(of, map) (I) (None) // ~ [None]
Option.traverse(of, map) (I) (Some([7])) // ~ [Some(7)]
Obviously, this translation deviates from the Haskell implementation, because I get a [None] where I should get a None. Honestly, this behavior corresponds precisely to my intuition, but I guess intuition isn't that helpful in functional programming. Now my question is
did I merely make a rookie mistake?
or is explicit type passing not equivalent to type inference (in terms of expressiveness)?
GHCi does not overlook any type error. It defaults an unconstrained Applicative to IO, but you only get this behavior in a GHCi prompt (and not a .hs source file). You can check
> :t pure Nothing
pure Nothing :: Applicative f => f (Maybe b)
But still have
> pure Nothing
Nothing
Your javascript implementation is fine; you passed in an Applicative instance for arrays and got what is expected.
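To see the same distinction in the Javascript version: passing an Identity applicative instead of the Array one (a sketch of mine, built on the Option encoding above) returns None unchanged, mirroring the Haskell behavior:
// Identity applicative: `of` wraps nothing, `map` is plain application
const idOf = x => x;
const idMap = f => x => f(x);

Option.traverse(idOf, idMap) (I) (None) // ~ None, not [None]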

How to map over arbitrary Iterables?

I wrote a reduce function for Iterables and now I want to derive a generic map that can map over arbitrary Iterables. However, I have encountered an issue: since Iterables abstract the data source, map can't determine its type (e.g. Array, String, Map etc.). I need this type to invoke the corresponding identity element/concat function. Three solutions come to mind:
pass the identity element/concat function explicitly const map = f => id => concat => xs (this is verbose and would leak internal API though)
only map Iterables that implement the monoid interface (that would be cool, but doesn't it introduce new types?)
rely on the prototype or constructor identity of ArrayIterator,StringIterator, etc.
I tried the latter, but isPrototypeOf/instanceof always yield false no matter what I do, for instance:
Array.prototype.values.prototype.isPrototypeOf([].values()); // false
Array.prototype.isPrototypeOf([].values()); // false
My questions:
Where are the prototypes of ArrayIterator/StringIterator/...?
Is there a better approach that solves the given issue?
Edit: [][Symbol.iterator]() and ("")[Symbol.iterator]() seem to share the same prototype:
Object.getPrototypeOf(Object.getPrototypeOf([][Symbol.iterator]())) ===
Object.getPrototypeOf(Object.getPrototypeOf(("")[Symbol.iterator]()))
A distinction by prototypes seems not to be possible.
Edit: Here is my code:
const values = o => Object.keys(o).values();
const next = iter => iter.next();

const foldl = f => acc => iter => {
  let loop = (acc, {value, done}) => done
    ? acc
    : loop(f(acc) (value), next(iter));
  return loop(acc, next(iter));
}

// static `map` version only for `Array`s - not what I desire
const map = f => foldl(acc => x => [...acc, f(x)]) ([]);

console.log( map(x => x + x) ([1,2,3].values()) ); // A
console.log( map(x => x + x) (("abc")[Symbol.iterator]()) ); // B
The code in line A yields the desired result. However, B yields an Array instead of a String, and the concatenation only works because Strings and Numbers are coincidentally equivalent in this regard.
Edit: There seems to be some confusion about why I am doing this: I want to use the iterable/iterator protocol to abstract iteration details away, so that my fold/unfold and derived map/filter etc. functions are generic. The problem is that you can't do this without also having a protocol for identity/concat. And my little "hack" of relying on prototype identity didn't work out.
#redneb made a good point in his response and I agree with him that not every iterable is also a "mappable". However, keeping that in mind I still think it is meaningful - at least in Javascript - to utilize the protocol in this way, until maybe in future versions there is a mappable or collection protocol for such usage.
I have not used the iterable protocol before, but it seems to me that it is essentially an interface designed to let you iterate over container objects using a for loop. The problem is that you are trying to use that interface for something it was not designed for; for that you would need a separate interface. It is conceivable that an object might be "iterable" but not "mappable". For example, imagine that in an application we are working with binary trees, and we implement the iterable interface for them by traversing them in, say, BFS order, just because that order makes sense for this particular application. How would a generic map work for this particular iterable? It would need to return a tree of the "same shape", but this particular iterable implementation does not provide enough information to reconstruct the tree.
So the solution to this is to define a new interface (call it Mappable, Functor, or whatever you like) but it has to be a distinct interface. Then, you can implement that interface for types that makes sense, such as arrays.
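A minimal sketch of what such a distinct interface could look like (the protocol and names here are my own illustration, not an established standard): types opt in by providing their own map, and the generic function dispatches to it:
// a hypothetical "mappable" protocol via a well-known symbol
const $map = Symbol.for("mappable/map");

const map = f => xs => xs[$map](f);

// arrays opt in trivially
Array.prototype[$map] = function (f) { return this.map(f); };

// strings opt in by rebuilding a string - something iteration alone can't do
String.prototype[$map] = function (f) { return Array.from(this, f).join(""); };

map(x => x + x)([1, 2, 3])       //=> [2, 4, 6]
map(c => c.toUpperCase())("abc") //=> "ABC"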
Pass the identity element/concat function explicitly const map = f => id => concat => xs
Yes, this is almost always necessary if the xs parameter doesn't expose the functionality to construct new values. In Scala, every collection type features a builder for this; unfortunately, there is nothing in the ECMAScript standard that matches it.
only map Iterables that implement the monoid interface
Well, yes, that might be one way to go. You don't even need to introduce "new types"; a standard for this already exists with the Fantasyland specification. The downsides, however, are:
most builtin types (String, Map, Set) don't implement the monoid interface despite being iterable
not all "mappables" are even monoids!
On the other hand, not all iterables are necessarily mappable. Trying to write a map over arbitrary iterables without falling back to an Array result is doomed to fail.
So rather just look for the Functor or Traversable interfaces, and use them where they exist. They might internally be built on an iterator, but that should not concern you. The only thing you might want to do is to provide a generic helper for creating such iterator-based mapping methods, so that you can e.g. decorate Map or String with it. That helper might as well take a builder object as a parameter.
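That builder-based helper might look something like this sketch (the builder shape is my own assumption):
// build a `map` for a concrete type from a builder describing
// how to start empty and how to append one mapped value
const mapWith = builder => f => xs => {
  let acc = builder.empty();
  for (const x of xs) acc = builder.append(acc, f(x));
  return acc;
};

const mapArray = mapWith({ empty: () => [], append: (xs, x) => [...xs, x] });
const mapString = mapWith({ empty: () => "", append: (s, c) => s + c });

mapArray(x => x * 2)([1, 2, 3])        //=> [2, 4, 6]
mapString(c => c.toUpperCase())("abc") //=> "ABC"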
rely on the prototype or constructor identity of ArrayIterator, StringIterator, etc.
That won't work, for example typed arrays are using the same kind of iterator as normal arrays. Since the iterator does not have a way to access the iterated object, you cannot distinguish them. But you really shouldn't anyway, as soon as you're dealing with the iterator itself you should at most map to another iterator but not to the type of iterable that created the iterator.
Where are the prototypes of ArrayIterator/StringIterator/...?
There are no global variables for them, but you can access them by using Object.getPrototypeOf after creating an instance.
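For example (a quick sketch; the typed-array check demonstrates the sharing mentioned above):
const ArrayIteratorProto = Object.getPrototypeOf([][Symbol.iterator]());
const StringIteratorProto = Object.getPrototypeOf(""[Symbol.iterator]());

ArrayIteratorProto === StringIteratorProto //=> false

// typed arrays hand out the same kind of iterator as plain arrays:
Object.getPrototypeOf(new Uint8Array(0)[Symbol.iterator]()) === ArrayIteratorProto //=> true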
You could compare the object strings, though this is not foolproof, as there have been known bugs in certain environments, and in ES6 the user can modify these strings.
console.log(Object.prototype.toString.call(""[Symbol.iterator]()));
console.log(Object.prototype.toString.call([][Symbol.iterator]()));
Update: You could get more reliable results by borrowing the object's next method and applying it to a known iterator of each type; this does require a fully ES6 spec-compliant environment. Something like this:
var sValues = String.prototype[Symbol.iterator];
var testString = 'abc';

function isStringIterator(value) {
  if (value === null || typeof value !== 'object') {
    return false;
  }
  try {
    return value.next.call(sValues.call(testString)).value === 'a';
  } catch (ignore) {}
  return false;
}

var aValues = Array.prototype.values;
var testArray = ['a', 'b', 'c'];

function isArrayIterator(value) {
  if (value === null || typeof value !== 'object') {
    return false;
  }
  try {
    return value.next.call(aValues.call(testArray)).value === 'a';
  } catch (ignore) {}
  return false;
}

var mapValues = Map.prototype.values;
var testMap = new Map([
  [1, 'MapSentinel']
]);

function isMapIterator(value) {
  if (value === null || typeof value !== 'object') {
    return false;
  }
  try {
    return value.next.call(mapValues.call(testMap)).value === 'MapSentinel';
  } catch (ignore) {}
  return false;
}

var setValues = Set.prototype.values;
var testSet = new Set(['SetSentinel']);

function isSetIterator(value) {
  if (value === null || typeof value !== 'object') {
    return false;
  }
  try {
    return value.next.call(setValues.call(testSet)).value === 'SetSentinel';
  } catch (ignore) {}
  return false;
}
var string = '';
var array = [];
var map = new Map();
var set = new Set();
console.log('string');
console.log(isStringIterator(string[Symbol.iterator]()));
console.log(isArrayIterator(string[Symbol.iterator]()));
console.log(isMapIterator(string[Symbol.iterator]()));
console.log(isSetIterator(string[Symbol.iterator]()));
console.log('array');
console.log(isStringIterator(array[Symbol.iterator]()));
console.log(isArrayIterator(array[Symbol.iterator]()));
console.log(isMapIterator(array[Symbol.iterator]()));
console.log(isSetIterator(array[Symbol.iterator]()));
console.log('map');
console.log(isStringIterator(map[Symbol.iterator]()));
console.log(isArrayIterator(map[Symbol.iterator]()));
console.log(isMapIterator(map[Symbol.iterator]()));
console.log(isSetIterator(map[Symbol.iterator]()));
console.log('set');
console.log(isStringIterator(set[Symbol.iterator]()));
console.log(isArrayIterator(set[Symbol.iterator]()));
console.log(isMapIterator(set[Symbol.iterator]()));
console.log(isSetIterator(set[Symbol.iterator]()));
<script src="https://cdnjs.cloudflare.com/ajax/libs/es6-shim/0.35.1/es6-shim.js"></script>
Note: included ES6-shim because Chrome does not currently support Array#values
I know this question was posted quite a while back, but take a look at
https://www.npmjs.com/package/fluent-iterable
It supports iterable maps along with ~50 other methods.
Using the iter-ops library, you can apply any processing logic while iterating only once:
import {pipe, map, concat} from 'iter-ops';

// some arbitrary iterables:
const iterable1 = [1, 2, 3];
const iterable2 = 'hello'; // strings are also iterable

const i1 = pipe(
  iterable1,
  map(a => a * 2)
);
console.log([...i1]); //=> 2, 4, 6

const i2 = pipe(
  iterable1,
  map(a => a * 3),
  concat(iterable2)
);
console.log([...i2]); //=> 3, 6, 9, 'h', 'e', 'l', 'l', 'o'
There's a plethora of operators in the library that you can use with iterables.
There's no clean way to do this for an arbitrary iterable. It is possible to create a map for built-in iterables and refer to it.
const iteratorProtoMap = [String, Array, Map, Set]
  .map(ctor => [
    Object.getPrototypeOf((new ctor)[Symbol.iterator]()),
    ctor
  ])
  .reduce((map, entry) => map.set(...entry), new Map);

function getCtorFromIterator(iterator) {
  return iteratorProtoMap.get(Object.getPrototypeOf(iterator));
}
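Usage then looks like this (derived from the code above):
getCtorFromIterator([][Symbol.iterator]())        //=> Array
getCtorFromIterator(''[Symbol.iterator]())        //=> String
getCtorFromIterator(new Set()[Symbol.iterator]()) //=> Set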
To support custom iterables, an API for registering them can also be added.
To provide a common pattern for concatenating/constructing a desired iterable, a callback can be provided for the map instead of a constructor.
