This is a function that deep-flattens an array:
const deepFlatten = (input) => {
let result = [];
input.forEach((val, index) => {
if (Array.isArray(val)) {
result.push(...deepFlatten(val));
} else {
result.push(val);
}
});
return result;
};
During a discussion, I was told it is not memory efficient, as it might cause stack overflows.
I read in http://2ality.com/2015/06/tail-call-optimization.html that I could potentially rewrite it so that it is TCO-ed.
What would that look like, and how could I measure its memory usage profile?
tail calls in general
I've shared another functional approach to flattening arrays in JavaScript; I think that answer shows a better way to solve this particular problem, but not all functions can be decomposed so nicely. This answer will focus on tail calls in recursive functions, and tail calls in general
In general, to move the recurring call into tail position, an auxiliary function (aux below) is created whose parameters hold all the state necessary to complete that step of the computation
const flattenDeep = arr =>
{
const aux = (acc, [x,...xs]) =>
x === undefined
? acc
: Array.isArray (x)
? aux (acc, x.concat (xs))
: aux (acc.concat (x), xs)
return aux ([], arr)
}
const data =
[0, [1, [2, 3, 4], 5, 6], [7, 8, [9]]]
console.log (flattenDeep (data))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
js doesn't really have tail call elimination
However, most JavaScript implementations still don't support tail calls - you'll have to approach this differently if you want to use recursion in your program and not worry about blowing the stack - this is also something I've written a lot about
My current go-to is the clojure-style loop/recur pair, because it gives you stack safety while still allowing your program to be written as a beautiful, pure expression
const recur = (...values) =>
({ type: recur, values })
const loop = f =>
{
let acc = f ()
while (acc && acc.type === recur)
acc = f (...acc.values)
return acc
}
const flattenDeep = arr =>
loop ((acc = [], [x,...xs] = arr) =>
x === undefined
? acc
: Array.isArray (x)
? recur (acc, x.concat (xs))
: recur (acc.concat (x), xs))
let data = []
for (let i = 2e4; i>0; i--)
data = [i, data]
// data is nested 20,000 levels deep
// data = [1, [2, [3, [4, ... [20000, []]]]]] ...
// stack-safe !
console.log (flattenDeep (data))
// [ 1, 2, 3, 4, ... 20000 ]
an important position
Why is tail position so important anyway? Well, have you ever thought about that return keyword? That's the way out of your function; and in a strictly-evaluated language like JavaScript, return <expr> means everything in expr needs to be computed before we can send the result out.
If expr contains a sub-expression with function calls that are not in tail position, those calls will each introduce a new frame, compute an intermediate value, and then return it to the calling frame for the tail call – which is why the stack can overflow if there's no way to identify when it's safe to discard a stack frame
Anyway, it's hard to talk about programming, so hopefully this small sketch helps identify calling positions in some common functions
const add = (x,y) =>
// + is in tail position
x + y
const sq = x =>
// * is in tail position
x * x
const sqrt = x =>
// Math.sqrt is in tail position
Math.sqrt (x)
const pythag = (a,b) =>
// sqrt is in tail position
// sq(a) and sq(b) must *return* to compute add
// add must *return* to compute sqrt
sqrt (add (sq (a), sq (b)))
// console.log displays the correct value because pythag *returns* it
console.log (pythag (3,4)) // 5
Stick with me here for a minute – now imagine there were no return values – since a function would have no way to send a value back to the caller, we could easily reason that all frames can be immediately discarded after the function has been evaluated
// instead of
const add = (x,y) =>
{ return x + y }
// no return value
const add = (x,y) =>
{ x + y }
// but then how do we get the computed result?
add (1,2) // => undefined
continuation passing style
Enter Continuation Passing Style – by adding a continuation parameter to each function, it's as if we invent our very own return mechanism
Don't get overwhelmed by the examples below – most people have already seen continuation passing style in the form of these misunderstood things called callbacks
// jQuery "callback"
$('a').click (event => console.log ('click event', event))
// node.js style "callback"
fs.readFile ('entries.txt', (err, text) =>
err
? console.error (err)
: console.log (text))
So that's how you work with the computed result – you pass it to a continuation
// add one parameter, k, to each function
// k makes *return* into a normal function
// note {}'s are used to suppress the implicit return value of JS arrow functions
const add = (x,y,k) =>
{ k (x + y) }
const sq = (x,k) =>
{ k (x * x) }
const sqrt = (x,k) =>
{ k (Math.sqrt (x)) }
const pythag = (a,b,k) =>
// sq(a) is computed, $a is the result
sq (a, $a => {
// sq(b) is computed, $b is the result
sq (b, $b => {
// add($a,$b) is computed, $sum is the result
add ($a, $b, $sum => {
// sqrt ($sum) is computed, continuation k is passed thru
sqrt ($sum, k) }) }) })
// here the final continuation is to log the result
// no *return* value was used !
// no reason to keep frames in the stack !
pythag (3, 4, $c => { console.log ('pythag', $c) })
how to get a value out?
This famous question: How do I return the response from an asynchronous call? has baffled millions of programmers – only, it really has nothing to do with "an asynchronous call" and everything to do with continuations and whether those continuations return anything
// nothing can save us...
// unless pythag *returns*
var result = pythag (3,4, ...)
console.log (result) // undefined
Without a return value, you must use a continuation to move the value to the next step in the computation – this can't be the first way I've tried to say that ^^
but everything is in tail position !
I know it might be hard to tell just by looking at it, but every function has exactly one function call in tail position – if we restore the return functionality in our functions, the value of call 1 is the value of call 2 is the value of call 3, etc – there's no need to introduce a new stack frame for subsequent calls in this situation – instead, call 1's frame can be re-used for call 2, and then re-used again for call 3; and we still get to keep the return value !
// restore *return* behaviour
const add = (x,y,k) =>
k (x + y)
const sq = (x,k) =>
k (x * x)
const sqrt = (x,k) =>
k (Math.sqrt (x))
const pythag = (a,b,k) =>
sq (a, $a =>
sq (b, $b =>
add ($a, $b, $sum =>
sqrt ($sum, k))))
// notice the continuation returns a value now: $c
// in an environment that optimises tail calls, this would only use 1 frame to compute pythag
const result =
pythag (3, 4, $c => { console.log ('pythag', $c); return $c })
// sadly, the environment you're running this in likely took almost a dozen
// but hey, it works !
console.log (result) // 5
tail calls in general; again
this conversion of a "normal" function to a continuation passing style function can be a mechanical process and done automatically – but what's the real point of putting everything into tail position?
Well, if we know that frame 1's value is the value of frame 2's, which is the value of frame 3's, and so on, we can collapse the stack frames manually using a while loop where the computed result is updated in-place during each iteration – a function utilising this technique is called a trampoline
Of course trampolines are most often talked about when writing recursive functions because a recursive function could "bounce" (spawn another function call) many times; or even indefinitely – but that doesn't mean we can't demonstrate a trampoline on our pythag function that would only spawn a few calls
const add = (x,y,k) =>
k (x + y)
const sq = (x,k) =>
k (x * x)
const sqrt = (x,k) =>
k (Math.sqrt (x))
// pythag now returns a "call"
// of course each of them are in tail position ^^
const pythag = (a,b,k) =>
call (sq, a, $a =>
call (sq, b, $b =>
call (add, $a, $b, $sum =>
call (sqrt, $sum, k))))
const call = (f, ...values) =>
({ type: call, f, values })
const trampoline = acc =>
{
// while the return value is a "call"
while (acc && acc.type === call)
// update the return value with the value of the next call
// this is equivalent to "collapsing" a stack frame
acc = acc.f (...acc.values)
// return the final value
return acc
}
// pythag now returns a type that must be passed to trampoline
// the call to trampoline actually runs the computation
const result =
trampoline (pythag (3, 4, $c => { console.log ('pythag', $c); return $c }))
// result still works
console.log (result) // 5
why are you showing me all of this?
So even though our environment doesn't support stack-safe recursion, as long as we keep everything in tail position and use our call helper, we can now convert any stack of calls into a loop
// doesn't matter if we have 4 calls, or 1 million ...
trampoline (call (... call (... call (...))))
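For instance, here is a minimal sketch (the countTo function is my own, not from the original) that uses the call and trampoline helpers defined above to run a million nested calls without growing the stack
const countTo = (n, acc = 0) =>
  acc >= n
    ? acc
    : call (countTo, n, acc + 1)
console.log (trampoline (countTo (1e6))) // 1000000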
In the first code example, I showed using an auxiliary loop, but I also used a pretty clever (albeit inefficient) loop that didn't require deep recurring into the data structure – that's not always possible; eg, sometimes your recursive function might spawn 2 or 3 recurring calls – what to do then?
Below I'm going to show you flatten as a naive, non-tail recursive procedure – what's important to note here is that one branch of the conditional results in two recurring calls to flatten – this tree-like recurring process might seem hard to flatten into an iterative loop at first, but a careful, mechanical conversion to continuation passing style will show this technique can work in almost any (if not all) scenarios
[ DRAFT ]
// naive, stack-UNSAFE
const flatten = ([x,...xs]) =>
x === undefined
? []
: Array.isArray (x)
// two recurring calls
? flatten (x) .concat (flatten (xs))
// one recurring call
: [x] .concat (flatten (xs))
Continuation passing style
// continuation passing style
const flattenk = ([x,...xs], k) =>
x === undefined
? k ([])
: Array.isArray (x)
? flattenk (x, $x =>
flattenk (xs, $xs =>
k ($x.concat ($xs))))
: flattenk (xs, $xs =>
k ([x].concat ($xs)))
Continuation passing style with trampoline
const call = (f, ...values) =>
({ type: call, f, values })
const trampoline = acc =>
{
while (acc && acc.type === call)
acc = acc.f (...acc.values)
return acc
}
const flattenk = ([x,...xs], k) =>
x === undefined
? call (k, [])
: Array.isArray (x)
? call (flattenk, x, $x =>
call (flattenk, xs, $xs =>
call (k, $x.concat ($xs))))
: call (flattenk, xs, $xs =>
call (k, ([x].concat ($xs))))
const flatten = xs =>
trampoline (flattenk (xs, $xs => $xs))
let data = []
for (let i = 2e4; i>0; i--)
data = [i, data];
console.log (flatten (data))
wups, you accidentally discovered monads
[ DRAFT ]
// yours truly, the continuation monad
const cont = x =>
k => k (x)
// back to functions with return values
// notice we don't need the additional `k` parameter
// but this time wrap the return value in a continuation, `cont`
// ie, `cont` replaces *return*
const add = (x,y) =>
cont (x + y)
const sq = x =>
cont (x * x)
const sqrt = x =>
cont (Math.sqrt (x))
const pythag = (a,b) =>
// sq(a) is computed, $a is the result
sq (a) ($a =>
// sq(b) is computed, $b is the result
sq (b) ($b =>
// add($a,$b) is computed, $sum is the result
add ($a, $b) ($sum =>
// sqrt ($sum) is computed, a continuation is returned
sqrt ($sum))))
// here the continuation just returns whatever it was given
const $c =
pythag (3, 4) ($c => $c)
console.log ($c)
// => 5
delimited continuations
[ DRAFT ]
const identity = x =>
x
const cont = x =>
k => k (x)
// reset
const reset = m =>
k => m (k)
// shift
const shift = f =>
k => f (x => k (x) (identity))
const concatMap = f => ([x,...xs]) =>
x === undefined
? [ ]
: f (x) .concat (concatMap (f) (xs))
// because shift returns a continuation, we can specialise it in meaningful ways
const amb = xs =>
shift (k => cont (concatMap (k) (xs)))
const pythag = (a,b) =>
Math.sqrt (Math.pow (a, 2) + Math.pow (b, 2))
const pythagTriples = numbers =>
reset (amb (numbers) ($x =>
amb (numbers) ($y =>
amb (numbers) ($z =>
// if x,y,z are a pythag triple
pythag ($x, $y) === $z
// then continue with the triple
? cont ([[ $x, $y, $z ]])
// else continue with nothing
: cont ([ ])))))
(identity)
console.log (pythagTriples ([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]))
// [ [ 3, 4, 5 ], [ 4, 3, 5 ], [ 6, 8, 10 ], [ 8, 6, 10 ] ]
You can't optimize it when your recursive call is inside forEach, because in order to apply TCO, the compiler needs to check that you are not saving any "state" from the previous call. In the case of forEach you do save "state": the current position.
To implement it with TCO in mind, you can rewrite that forEach as a recursive call; it would look something like this:
function deepFlattenTCO(input) {
const helper = (first, rest, result) => {
if (!Array.isArray(first)) {
result.push(first);
if (rest.length > 0) {
return helper(rest, [], result);
} else {
return result;
}
} else {
const [newFirst, ...newRest] = first.concat(rest);
return helper(newFirst, newRest, result);
}
};
return helper(input, [], []);
}
console.log(deepFlattenTCO([
[1], 2, [3], 4, [5, 6, [7]]
]));
You can see that in each return the only operation executed is the recursive call, so no "state" is saved between recursive calls; an engine that implements proper tail calls can therefore apply the optimization.
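Note that this relies on the engine actually implementing proper tail calls; as a quick sanity check (my own, hypothetical), deeply nested input should still overflow in engines that don't:
let deep = [];
for (let i = 2e4; i > 0; i--) deep = [i, deep];
try {
  console.log(deepFlattenTCO(deep).length); // 20000 where proper tail calls exist
} catch (e) {
  console.log(e.message); // e.g. "Maximum call stack size exceeded" elsewhere
}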
Recursive functions are elegantly expressed, and tail recursion optimization can even prevent them from blowing the stack.
However, any recursive function can be converted to an uglier iterator based solution, which might be beautiful only in its memory consumption and performance, though not to look at.
See: Iterative solution for flattening n-th nested arrays in Javascript
and perhaps this test of different approaches: https://jsperf.com/iterative-array-flatten/2
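For a flavour of what such an iterative version can look like, here is a minimal sketch of my own using an explicit stack (the linked answers show similar ideas):
const flattenIterative = (input) => {
  const stack = [...input];
  const result = [];
  while (stack.length) {
    const next = stack.pop();
    if (Array.isArray(next)) {
      stack.push(...next); // unpack the array instead of recurring into it
    } else {
      result.push(next);
    }
  }
  return result.reverse(); // pop() walks back-to-front, so restore order
};
console.log(flattenIterative([1, [2, [3, [4]]], 5])); // [1, 2, 3, 4, 5]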
Related
This is the question.
Given an integer n, return an array ans of length n + 1 such that for each i (0 <= i <= n), ans[i] is the number of 1's in the binary representation of i.
https://leetcode.com/problems/counting-bits/
And this is my solution below.
If the input is 2, the expected output should be [0,1,1] but I keep getting [0,2,2]. Why is that?
var countBits = function(n) {
//n=3. [0,1,2,3]
var arr=[0];
for (var i=1; i<=n; i++){
var sum = 0;
var value = i;
while(value != 0){
sum += value%2;
value /= 2;
}
arr.push(sum);
}
return arr;
};
console.log(countBits(3));
You're doing way too much work.
Suppose b is the largest power of 2 corresponding to the first bit in i. Evidently, i has exactly one more 1 in its binary representation than does i - b. But since you're generating the counts in order, you've already worked out how many 1s there are in i - b.
The only trick is how to figure out what b is. And to do that, we use another iterative technique: as you list numbers, b changes exactly at the moment that i becomes twice the previous value of b:
const countBits = function(n) {
let arr = [0], bit = 1;
for (let i = 1; i <= n; i++){
if (i == bit + bit) bit += bit;
arr.push(arr[i - bit] + 1);
}
return arr;
};
console.log(countBits(20));
This technique is usually called "dynamic programming". In essence, it takes a recursive definition and computes it bottom-up: instead of starting at the desired argument and recursing down to the base case, it starts at the base and then computes each intermediate value which will be needed until it reaches the target. In this case, all intermediate values are needed, saving us from having to think about how to compute only the minimum number of intermediate values necessary.
Think of it this way: if you know how many ones are there in a number X, then you immediately know how many ones are there in X*2 (the same) and X*2+1 (one more). Since you're processing numbers in order, you can just push both derived counts to the result and skip to the next number:
const countBits = (N) => {
  let b = [0, 1]
  for (let i = 1; i <= N / 2; i++) {
    b.push(b[i])     // the count for 2*i is the same as for i
    b.push(b[i] + 1) // the count for 2*i+1 has one more 1
  }
  if (b.length > N + 1) b.pop()
  return b
}
console.log(countBits(10)) // [0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2]
Since we push two numbers at once, the result would be one entry too long for even N, which is why the last number is popped afterwards.
Use Math.floor():
sum += Math.floor(value%2);
value = Math.floor(value/2);
I guess your algorithm works in a typed language where integer division results in an integer
Here's a very different approach, using the opposite of a fold (such as Array.prototype.reduce) typically called unfold. In this case, we start with a seed array, perform some operation on it to yield the next value, and recur, until we decide to stop.
We write a generic unfold and then use it with a callback which accepts the entire array we've found so far plus next and done callbacks, and then chooses whether to stop (if we've reached our limit) or continue. In either case, it calls one of the two callbacks.
It looks like this:
const _unfold = (fn, init) =>
fn (init, (x) => _unfold (fn, [...init, x]), () => init)
// Number of 1's in the binary representation of each integer in [`0 ... n`]
const oneBits = (n) => _unfold (
(xs, next, done) => xs .length < n ? next (xs .length % 2 + xs [xs .length >> 1]) : done(),
[0]
)
console .log (oneBits (20))
I have a GitHub Gist which shows a few more examples of this pattern.
An interesting possible extension would be to encapsulate the handling of the array-up-to-length-n bit, and make this function trivial. That's not the only use of such an _unfold, but it's probably a common one. It could look like this:
const _unfold = (fn, init) =>
fn (init, (x) => _unfold (fn, [...init, x]), () => init)
const unfoldN = (fn, init) => (n) => _unfold (
(xs, next, done) => xs .length < n ? next (fn (xs)) : done (),
init
)
const oneBits = unfoldN (
(xs) => xs .length % 2 + xs [xs .length >> 1],
[0]
)
console .log (oneBits (20))
Here we have two helper functions that make oneBits quite trivial to write. And those helpers have many potential uses.
I would like to perform some updates to an array in an object, and then calculate another parameter based on this update. This is what I tried:
import * as R from 'ramda'
const obj = {
arr: [
2,
3
],
result: {
sumOfDoubled: 0
}
};
const double = a => {
return a*2;
}
const arrLens = R.lensProp('arr');
const res0sumOfDblLens = R.lensPath(['result','sumOfDoubled']);
const calc = R.pipe(
R.over(arrLens,R.map(double)),
R.view(arrLens),
R.sum,
R.set(res0sumOfDblLens)
);
const updatedObjA = calc(obj);
const updatedObjB = R.set(res0sumOfDblLens,R.sum(R.view(arrLens,R.over(arrLens,R.map(double),obj))),obj);
// what I want: {"arr":[4,6],"result":{"sumOfDoubled":10}}
console.log(JSON.stringify(obj)); //{"arr":[2,3],"result":{"sumOfDoubled":0}}, as expected
console.log(JSON.stringify(updatedObjA)); //undefined
console.log(JSON.stringify(updatedObjB)); //{"arr":[2,3],"result":{"sumOfDoubled":10}}, correct result but the array did not update
I realise that neither approach will work; approach A boils down to R.set(res0sumOfDblLens, 10), which makes no sense as it doesn't have a target object for the operation. Approach B, on the other hand, manipulates the base object twice rather than passing the result of the first manipulation as input for the second.
How can I achieve this using only one function composition; i.e., apply the double() function to one part of the object, and then pass that updated object as input for calculating sumOfDoubled?
As well as OriDrori's converge solution, you could also use either of two other Ramda functions. I always prefer lift to converge when it works; it feels more like standard FP, where converge is very much a Ramda artifact. It doesn't always do the job because of some of the variadic features of converge. But it does here, and you could write:
const calc = pipe (
over (arrLens, map (multiply (2))),
lift (set (res0sumOfDblLens) ) (
pipe (view (arrLens), sum),
identity
)
)
But that identity in either of these solutions makes me wonder if there's something better. And there is. Ramda's chain when applied to functions is what's sometimes known as the starling combinator, :: (a -> b -> c) -> (a -> b) -> a -> c. Or said a different way, chain (f, g) //~> (x) => f (g (x)) (x). And that's just what we want to apply here. So with chain, this is simplified further:
const arrLens = lensProp('arr')
const res0sumOfDblLens = lensPath(['result', 'sumOfDoubled'])
const calc = pipe (
over (arrLens, map (multiply (2))),
chain (
set (res0sumOfDblLens),
pipe (view (arrLens), sum)
)
)
const obj = { arr: [2, 3], result: { sumOfDoubled: 0 }}
console .log (calc (obj))
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
<script>const {lensProp, lensPath, pipe, over, map, multiply, chain, set, view, sum} = R </script>
To get the updated value, and the object, so you can set the new sum, you can use R.converge():
const arrLens = R.lensProp('arr');
const res0sumOfDblLens = R.lensPath(['result', 'sumOfDoubled']);
const calc = R.pipe(
R.over(arrLens, R.map(R.multiply(2))),
R.converge(R.set(res0sumOfDblLens), [
R.pipe(R.view(arrLens), R.sum),
R.identity
])
);
const obj = { arr: [2, 3], result: { sumOfDoubled: 0 }};
const result = calc(obj);
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
Maybe a variant without a lens would be a better fit for your case?
const doubleArr = pipe(
path(['arr']),
map(x => x*2)
)
const newData = applySpec({
arr: doubleArr,
result: {
sumOfDoubled: pipe(
doubleArr,
sum
)
}
})
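A quick usage sketch (assuming R's functions are destructured into scope, e.g. const { pipe, path, map, sum, applySpec } = R, as in the other snippets):
const obj = { arr: [2, 3], result: { sumOfDoubled: 0 } }
console.log(newData(obj))
// => { arr: [ 4, 6 ], result: { sumOfDoubled: 10 } }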
I'm trying to play a bit and implementing a memoization pattern for multiple values using JavaScript.
I managed to write the code for single value:
var lazy = {};
lazy.memoization = evaluator => {
const cache = new Map;
return key => {
cache.has(key) || cache.set(key, evaluator(key));
return cache.get(key);
};
};
var memoize = lazy.memoization(key => console.log('computing ', key));
memoize(1); // -> computing 1
memoize(2); // -> computing 2
Here is the version for multiple keys and it does not work as expected, it just outputs 'computing Array []', 'computing undefined':
var lazy = {};
lazy.memoization = evaluator => {
const cash = new Map;
return (...keys) => {
var values = [];
keys.reduce((v, f) => {
if (!cash.has(v)) {
cash.set(v, evaluator(v));
}
values.push(cash.get(v));
}, values);
return values;
};
};
var memoizeMultiple = lazy.memoization((...keys) => {
keys.forEach(key => console.log('computing ', key))
});
memoizeMultiple(1, 2);
What is wrong here?
There are a bunch of problems with your code. First off, reduce is a kind of fold, which means it usually is used to "collapse" a data structure into a single value. In order to do this, the function passed into reduce gets the accumulation value first and each value inside the data structure second.
const sumOf = ns => ns.reduce((sum, num) => sum + num, 0);
sumOf([1, 2, 3, 4, 5]); // -> 15
In this example, the data structure is an Array, which holds Number values. reduce is used to collapse all numbers in the array into a final value (by summing them up). The collapsing function is called a reducer function (it does addition in this example). Finally, the 0 passed into reduce is the seed value.
Let's track down what happens step by step:
In the first iteration, the reducer function is passed the seed value and the first number in the array. It therefore looks like:
(0, 1) => 0 + 1
The second iteration starts with the result from the first iteration as the accumulator value and the second number in the array:
(1, 2) => 1 + 2
So in full, it works like this:
(0, 1) => 0 + 1
(1, 2) => 1 + 2
(3, 3) => 3 + 3
(6, 4) => 6 + 4
(10, 5) => 10 + 5
After the last iteration, reduce returns the final accumulator, which is 15 in this example.
OK, back to the code you provided. Your "multi arguments memoization" version uses reduce, but the reducer function doesn't return the intermediate results as a new accumulator value, and you don't return the final result reduce produces.
Another problem is your evaluator function. The value it returns is stored inside the caching Map instance. In your code, it doesn't return anything other than undefined. Therefore, undefined is stored and returned on subsequent calls to the memoized function.
If we address these issues, it works:
var lazy = {};
lazy.memoization = evaluator => {
const cache = new Map();
return (...args) => {
return args.reduce((acc, arg) => {
if (!cache.has(arg)) {
cache.set(arg, evaluator(arg)); // stores the returned value inside the cache variable
}
return acc.concat(cache.get(arg));
}, []); // the result should be an array, so use that as the seed
};
};
var memoizeMultiple = lazy.memoization(value => {
console.log('computing ', value)
return value; // you have to return something in here, because the return value is stored
});
memoizeMultiple(1, 2);
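As a quick check of the caching (my own addition, run right after the snippet above): calling the memoized function again with a key it has already seen should not trigger the evaluator for that key
memoizeMultiple(1, 3);
// logs only 'computing 3' – the value for 1 is served from the cache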
I hope this clarifies things a bit. Happy coding!
So I have some sample code for a function that goes through an array and all of its sub-arrays by recursion, and counts the number of matching strings found.
sample array:
const array = [
'something',
'something',
[
'something',
'something'
],
'something',
[
'something',
[
'something',
'something'
],
'something',
[
[
'something',
'something',
[
'anything'
]
],
'something',
],
[
'something',
[
'something',
'something',
'something',
[
'anything',
'something',
[
'something',
'anything'
]
]
]
]
],
'anything'
];
My function can go through this array and count the number of "something"'s in it.
This code works well:
let somethingsFound = 0;
const searchArrays = (array, stringMatch) => {
for(let i=0;i<array.length;i++){
const item = array[i];
if((typeof item) === 'string'){
(item === stringMatch) ? somethingsFound++ : undefined;
} else if ((typeof item) === 'object'){
searchArrays(item, stringMatch);
}
}
}
searchArrays(array, 'something');
console.log(`Found ${somethingsFound} somethings`);
console output:
>Found 18 somethings
However
Here is the part I don't understand and need some explanation about. If I remove the let declaration on the for-loop variable i and just implicitly declare it (i = 0; i < array.length; i++), then my function goes into infinite recursion. I checked it by putting a console.log("running search") statement and saw that.
What does let do exactly in this situation? I have tried reading about it but couldn't quite understand what's going on, and how exactly the recursion and for loop interact.
Here's the failing block of code just in case; all that differs is that let declaration:
let somethingsFound = 0;
const searchArrays = (array, stringMatch) => {
for(i=0;i<array.length;i++){
const item = array[i];
if((typeof item) === 'string'){
(item === stringMatch) ? somethingsFound++ : undefined;
} else if ((typeof item) === 'object'){
searchArrays(item, stringMatch);
}
}
}
searchArrays(array, 'something');
console.log(`Found ${somethingsFound} somethings`);
Thanks! CodeAt30
...and just implicitly declare it i=0;
That's not a declaration, it's just creating an implicit global*. Since it's a global, it's shared by all of the calls to searchArrays, so your outer calls are ending prematurely (because i has been incremented by your inner calls).
Example:
function withDeclaration(recurse) {
for (let i = 0; i < 3; ++i) {
console.log((recurse ? "Outer " : "Inner ") + i);
if (recurse) {
withDeclaration(false);
}
}
}
function withoutDeclaration(recurse) {
for (i = 0; i < 3; ++i) {
console.log((recurse ? "Outer " : "Inner ") + i);
if (recurse) {
withoutDeclaration(false);
}
}
}
console.log("With declaration:");
withDeclaration(true);
console.log("Without declaration:");
withoutDeclaration(true);
The moral of the story: Never rely on implicit globals. Declare your variables, in the innermost scope in which you need them. Use "use strict" to make implicit global creation an error.
* (that's a post on my anemic little blog)
When you use the "implicitly declared" variable, you will only have one such variable, independent of where you are in the recursion tree. It will be one global variable.
That will effectively destroy the logic of your algorithm, as the deepest recursion level will move the value of i beyond the array length; when you backtrack, the loop at the previous recursion level will suddenly jump to that value of i, probably skipping several valid array entries it should have dealt with.
Always declare variables.
TJ and Trincot do a good job of fixing your program – I'm going to try to fix your thinking...
recursion is a functional heritage
Recursion is a concept that comes from functional style. Mixing it with imperative style is a source of much pain and confusion for new programmers.
To design a recursive function, we identify the base and inductive case(s)
base case – the first value of the input is Empty - if the input is empty, there are obviously no matches, therefore return 0
inductive case 1 – first is not empty, but it is an Array – recur on first plus recur on the rest of the values
inductive case 2 - first is not empty and not an array, therefore it is a plain value – if first matches match, return 1 for the match plus the result of recurring on the rest of the values
inductive case 3 - first is not empty, not an array, nor does it match match – recur on the rest of values
As a result of this implementation, all pain and suffering are removed from the program. We do not concern ourselves with local state variables, variable reassignment, array iterators, incrementing iterators, or other side effects introduced by constructs like for
For brevity's sake, I replaced 'something' and 'anything' in your data with 'A' and 'B' respectively.
const Empty =
Symbol ()
const searchArrays = (match, [ first = Empty, ...rest ]) =>
{
/* no value */
if (first === Empty)
return 0
/* value is NOT empty */
else if (Array.isArray (first))
return searchArrays (match, first) + searchArrays (match, rest)
/* value is NOT array */
else if (first === match)
return 1 + searchArrays (match, rest)
/* value is NOT match */
else
return searchArrays (match, rest)
}
const data =
['A','A',['A','A'],'A',['A',['A','A'],'A',[['A','A',['B']],'A',],['A',['A','A','A',['B','A',['A','B']]]]],'B']
console.log (searchArrays ('A', data)) // 18
console.log (searchArrays ('B', data)) // 4
console.log (searchArrays ('C', data)) // 0
with functional style
Or encode searchArrays as a pure functional expression – this program is the same but exchanges the imperative-style if/else if/else and return statement syntaxes for ternary expressions
const Empty =
Symbol ()
const searchArrays = (match, [ first = Empty, ...rest ]) =>
first === Empty
? 0
: Array.isArray (first)
? searchArrays (match, first) + searchArrays (match, rest)
: first === match
? 1 + searchArrays (match, rest)
: searchArrays (match, rest)
const data =
['A','A',['A','A'],'A',['A',['A','A'],'A',[['A','A',['B']],'A',],['A',['A','A','A',['B','A',['A','B']]]]],'B']
console.log (searchArrays ('A', data)) // 18
console.log (searchArrays ('B', data)) // 4
console.log (searchArrays ('C', data)) // 0
without magic
Above, we use a rest parameter to destructure the input array. If this is confusing to you, it will help to see it in a simplified example. Note Empty is used so that our function can identify when to stop.
const Empty =
Symbol ()
const sum = ([ first = Empty, ...rest]) =>
first === Empty
? 0
: first + sum (rest)
console.log (sum ([ 1, 2, 3, 4 ])) // 10
console.log (sum ([])) // 0
This is a high-level feature included in newer versions of JavaScript, but we don't have to use it if it makes us uncomfortable. Below, we rewrite sum without the fanciful destructuring syntaxes
const isEmpty = (xs = []) =>
xs.length === 0
const first = (xs = []) =>
xs [0]
const rest = (xs = []) =>
xs.slice (1)
const sum = (values = []) =>
isEmpty (values)
? 0
: first (values) + sum (rest (values))
console.log (sum ([ 1, 2, 3, 4 ])) // 10
console.log (sum ([])) // 0
We can take our isEmpty, first, and rest functions and reimplement searchArrays now – notice the similarities with sum above
const searchArrays = (match, values = []) =>
isEmpty (values)
? 0
: Array.isArray (first (values))
? searchArrays (match, first (values)) + searchArrays (match, rest (values))
: first (values) === match
? 1 + searchArrays (match, rest (values))
: searchArrays (match, rest (values))
The full snippet below shows that it works the same
const isEmpty = (xs = []) =>
xs.length === 0
const first = (xs = []) =>
xs [0]
const rest = (xs = []) =>
xs.slice (1)
const searchArrays = (match, values = []) =>
isEmpty (values)
? 0
: Array.isArray (first (values))
? searchArrays (match, first (values)) + searchArrays (match, rest (values))
: first (values) === match
? 1 + searchArrays (match, rest (values))
: searchArrays (match, rest (values))
const data =
['A','A',['A','A'],'A',['A',['A','A'],'A',[['A','A',['B']],'A',],['A',['A','A','A',['B','A',['A','B']]]]],'B']
console.log (searchArrays ('A', data)) // 18
console.log (searchArrays ('B', data)) // 4
console.log (searchArrays ('C', data)) // 0
with powerful abstraction
As programmers, we frequently need to "traverse a data structure and do an operation on each element". Identifying these patterns and abstracting them into generic, reusable functions is at the core of higher-level thinking, which unlocks the ability to write higher-level programs like this
const searchArrays = (match, values = []) =>
deepFold ( (count, x) => x === match ? count + 1 : count
, 0
, values
)
This skill does not come automatically but there are techniques that can help you achieve higher-level thinking. In this answer I aim to give the reader a little insight. If this sort of thing interests you, I encourage you to take a look ^_^
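deepFold itself isn't defined in this excerpt – here is a minimal sketch of what such a function might look like (the name and argument order are inferred from the call above, so treat it as an assumption):
const deepFold = (f, acc, xs) =>
  xs.reduce
    ( (acc, x) =>
        Array.isArray (x)
          ? deepFold (f, acc, x) // recur into nested arrays
          : f (acc, x)           // fold plain values with the reducer
    , acc
    )
const data =
  ['A', ['B', ['A']], 'A']
console.log (deepFold ((count, x) => x === 'A' ? count + 1 : count, 0, data)) // 3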
recursion caution
JavaScript does not yet support tail call elimination which means extra precaution needs to be taken when writing recursive functions. For a code example that follows your program closely, see this answer.
When working with arrays, intermediate representations are needed regularly - particularly in connection with functional programming, in which data is often treated as immutable:
const square = x => x * x;
const odd = x => (x & 1) === 1;
let xs = [1,2,3,4,5,6,7,8,9];
// unnecessary intermediate array:
xs.map(square).filter(odd); // [1,4,9,16,25,36,49,64,81] => [1,9,25,49,81]
// even worse:
xs.map(square).filter(odd).slice(0, 2); // [1,9]
How can I avoid this behavior in Javascript/Ecmascript 2015 to obtain more efficient iterative algorithms?
Transducers are one possible way to avoid intermediate results within iterative algorithms. In order to understand them better, you have to realize that transducers by themselves are rather pointless:
// map transducer
let map = tf => rf => acc => x => rf(acc)(tf(x));
Why should we pass a reducing function to map for each invocation when that function is always the same, namely concat?
The answer to this question is located in the official transducer definition:
Transducers are composable algorithmic transformations.
Transducers develop their expressive power only in conjunction with function composition:
const comp = f => g => x => f(g(x));
let xf = comp(filter(gt3))(map(inc));
foldL(xf(append))([])(xs);
comp is passed the transducers (filter and map); the composite xf is then passed a single reduction function (append) as its final argument. From that, a transformation sequence is built that requires no intermediate arrays: each array element passes through the entire sequence before the next element is in line.
At this point, the definition of the map transducer is understandable: Composability requires matching function signatures.
Note that the order of evaluation of the transducer stack goes from left to right and is thus opposed to the normal order of function composition.
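To make that ordering concrete, here is a small sketch using the non-CPS map transducer from above, plus non-CPS filter, fold, and append definitions of my own:
const comp = f => g => x => f(g(x));
const map = tf => rf => acc => x => rf(acc)(tf(x));
const filter = pred => rf => acc => x => pred(x) ? rf(acc)(x) : acc;
const append = xs => ys => xs.concat(ys);
const foldL = rf => acc => xs => xs.reduce((acc_, x) => rf(acc_)(x), acc);
const gt3 = x => x > 3;
const inc = x => x + 1;
// although filter is the left argument of comp, each element meets the
// filter first and only then the mapping function
let xf = comp(filter(gt3))(map(inc));
console.log(foldL(xf(append))([])([1, 3, 5, 7])); // [6, 8]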
An important property of transducers is their ability to exit iterative processes early. In the chosen implementation, this behavior is achieved by implementing both transducers and foldL in continuation passing style. An alternative would be lazy evaluation. Here is the CPS implementation:
const foldL = rf => acc => xs => {
return xs.length
? rf(acc)(xs[0])(acc_ => foldL(rf)(acc_)(xs.slice(1)))
: acc;
};
// transducers
const map = tf => rf => acc => x => cont => rf(acc)(tf(x))(cont);
const filter = pred => rf => acc => x => cont => pred(x) ? rf(acc)(x)(cont) : cont(acc);
const takeN = n => rf => acc => x => cont =>
acc.length < n - 1 ? rf(acc)(x)(cont) : rf(acc)(x)(id);
// reducer
const append = xs => ys => xs.concat(ys);
// transformers
const inc = x => ++x;
const gt3 = x => x > 3;
const comp = f => g => x => f(g(x));
const liftC2 = f => x => y => cont => cont(f(x)(y));
const id = x => x;
let xs = [1,3,5,7,9,11];
let xf = comp(filter(gt3))(map(inc));
foldL(xf(liftC2(append)))([])(xs); // [6,8,10,12]
xf = comp(comp(filter(gt3))(map(inc)))(takeN(2));
foldL(xf(liftC2(append)))([])(xs); // [6,8]
Please note that this implementation is more of a proof of concept than a full-blown solution. The obvious benefits of transducers are:
no intermediate representations
purely functional and concise solution
genericity (work with any aggregate/collection, not just arrays)
extraordinary code reusability (reducers/transformers are common functions with their usual signatures)
Theoretically, CPS is as fast as imperative loops, at least per the Ecmascript 2015 spec, since all tail calls have the same return point and can thereby share the same stack frame (TCO).
It is considered controversial whether this approach is idiomatic enough for a Javascript solution. I prefer this functional style. However, the most common transducer libraries are implemented in object style and should look more familiar to OO developers.