Way of understanding recursion properly (JavaScript)

I'm trying to understand recursion, and I have a somewhat decent understanding of how it intuitively works; however, the aggregation of the returned data is the bit I struggle with.
For instance, to flatten an array in JavaScript I came up with the following code:
var _flatten = function(arr){
    if(!(arr instanceof Array)) return arr;
    var g = [];
    function flatten(arr){
        for(var i = 0; i < arr.length; i++){
            if(arr[i] instanceof Array){
                flatten(arr[i]);
            }else{
                g.push(arr[i]);
            }
        }
    }
    flatten(arr);
    return g;
}
Turning something like this
var list = [1,2,3,4,5,6,[1,2,3,4,5,[1,2,3],[1,2,3,4]]];
into this: [ 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 1, 2, 3, 1, 2, 3, 4 ]
Which is fine and all, but the global variable g seems like some kind of cheap hack. I don't know how to think about the result returned when getting to the top of the stack and the return of the function propagating back down the stack. How would you implement this function, and how can I get a better grasp on this?
Thanks!

Instead of a global variable (to make it more proper recursion) you can send in g as an argument to the flatten function, and pass the modified g back out with a return statement.
var _flatten = function(arr) {
    if (!(arr instanceof Array)) return arr;
    function flatten(arr, g) {
        for (var i = 0; i < arr.length; i++) {
            if (arr[i] instanceof Array) {
                flatten(arr[i], g);
            } else {
                g.push(arr[i]);
            }
        }
        return g;
    }
    return flatten(arr, []);
}
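To see how the result can instead propagate back purely through return values (the part of the question about results coming back down the stack), here is a minimal sketch of my own, not taken from the answer above; the function name is illustrative:
// Each call returns its own flattened result; the caller concatenates the
// child results, so no shared accumulator is needed.
function flattenReturning(value) {
    if (!(value instanceof Array)) return [value];  // base case: wrap a single value
    return value.reduce(function (acc, item) {
        return acc.concat(flattenReturning(item));  // recursive case: merge child results
    }, []);
}
flattenReturning([1, 2, [3, [4, 5]]]); // -> [1, 2, 3, 4, 5]
Reading it bottom-up: the deepest calls return small arrays, and each level up concatenates what its children returned.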

There are many ways to write an array flattening procedure, but I understand your question is about understanding recursion in general.
The g isn't global in any sense of the word, but it is a symptom of the implementation choices. Mutation isn't necessarily bad so long as you keep it localized to your function – that g is never leaked outside of the function where someone could potentially observe the side effects.
Personally, though, I think it's better to break your problem into small generic procedures that make it much easier to describe your code.
You'll note that we don't have to set up temporary variables like g or handle incrementing array iterators like i – we don't even have to look at the .length property of the array. Not having to think about these things makes it really nice to write our program in a declarative way.
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs => xs.map(f).reduce((x,y) => x.concat(y), [])
// id :: a -> a
const id = x => x
// flatten :: [[a]] -> [a]
const flatten = concatMap (id)
// isArray :: a -> Bool
const isArray = Array.isArray
// deepFlatten :: [[a]] -> [a]
const deepFlatten = concatMap (x => isArray(x) ? deepFlatten(x) : x)
// your sample data
let data = [0, [1, [2, [3, [4, 5], 6]]], [7, [8]], 9]
console.log(deepFlatten(data))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
console.log(flatten(data))
// [ 0, 1, [ 2, [ 3, [ 4, 5 ], 6 ] ], 7, [ 8 ], 9 ]
First, you'll see I made two different flattening procedures: flatten to flatten one level of nesting, and deepFlatten to flatten an arbitrarily deeply nested array.
You'll also see I use Array.prototype.map and Array.prototype.reduce since these are provided by ECMAScript, but that doesn't mean you're limited to the procedures you're given. You can make your own procedures to fill the gaps. Here we made concatMap, which is a useful generic provided by other languages such as Haskell.
Utilizing these generics, you can see that deepFlatten is an insanely simple procedure.
// deepFlatten :: [[a]] -> [a]
const deepFlatten = concatMap (x => isArray(x) ? deepFlatten(x) : x)
It's made up of a single expression: a lambda with a single branch (via the ternary operator ?:).
Maybe it's a lot to take in, but hopefully it demonstrates that "writing a recursive procedure" isn't always about complicated setup of state variables and complex logic to control the recursion. In this case, it's a simple
if (condition) recurse else don't
If you have any questions, let me know. I'm happy to help you in any way I can.

In fact, recursive coding is very simple, and every aspect of it should be contained in the function body. Any info that needs to be passed along should be sent through arguments to the next recursion. Using global variables is ugly and should be avoided. Accordingly, I would simply do the in-place array flattening job as follows:
var list = [1,2,3,4,5,6,[1,2,3,4,5,[1,2,3],[1,2,[9,8,7],3,4]]];
function flatArray(arr){
    for (var i = 0, len = arr.length; i < len; i++)
        // If the current element is an array, remove it, flatten it recursively,
        // splice the flattened items back in at the same index, and refresh len.
        Array.isArray(arr[i]) && (arr.splice(i, 0, ...flatArray(arr.splice(i, 1)[0])), len = arr.length);
    return arr;
}
console.log(flatArray(list));

Related

Sorting an array without mutating the original array

Let's suppose I wanted a sort function that returns a sorted copy of the inputted array. I naively tried this
function sort(arr) {
    return arr.sort();
}
and I tested it with this, which shows that my sort method is mutating the array.
var a = [2,3,7,5,3,7,1,3,4];
sort(a);
alert(a); //alerts "1,2,3,3,3,4,5,7,7"
I also tried this approach
function sort(arr) {
    return Array.prototype.sort(arr);
}
but it doesn't work at all.
Is there a straightforward way around this, preferably a way that doesn't require hand-rolling my own sorting algorithm or copying every element of the array into a new one?
You need to copy the array before you sort it. One way with es6:
const sorted = [...arr].sort();
The spread syntax in an array literal (copied from MDN):
var arr = [1, 2, 3];
var arr2 = [...arr]; // like arr.slice()
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator
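A quick check (illustrative values) that the original array is left untouched:
const arr = [3, 1, 2];
const sorted = [...arr].sort();
console.log(sorted); // [1, 2, 3]
console.log(arr);    // [3, 1, 2] (the original is unchanged)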
Just copy the array. There are many ways to do that:
function sort(arr) {
    return arr.concat().sort();
}
// Or:
return Array.prototype.slice.call(arr).sort(); // For array-like objects
Try the following
function sortCopy(arr) {
    return arr.slice(0).sort();
}
The slice(0) expression creates a copy of the array starting at element 0.
You can use slice with no arguments to copy an array:
var foo,
    bar;
foo = [3,1,2];
bar = foo.slice().sort();
You can also do this
d = [20, 30, 10]
e = Array.from(d)
e.sort()
This way d will not get mutated.
function sorted(arr) {
    const temp = Array.from(arr)
    return temp.sort()
}
//Use it like this
x = [20, 10, 100]
console.log(sorted(x))
Update - Array.prototype.toSorted() proposal
Array.prototype.toSorted(compareFn) -> Array is a new method proposed for addition to Array.prototype and is currently at stage 3 (soon to be available).
This method keeps the target array untouched and returns a sorted copy of it instead.
Anyone who wants to do a deep copy (e.g. if your array contains objects) can use:
let arrCopy = JSON.parse(JSON.stringify(arr))
Then you can sort arrCopy without changing arr.
arrCopy.sort((obj1, obj2) => obj1.id - obj2.id)
Please note: this can be slow for very large arrays.
Try this to sort the numbers. This does not mutate the original array.
function sort(arr) {
    return arr.slice(0).sort((a, b) => a - b);
}
There's a new tc39 proposal, which adds a toSorted method to Array that returns a copy of the array and doesn't modify the original.
For example:
const sequence = [3, 2, 1];
sequence.toSorted(); // => [1, 2, 3]
sequence; // => [3, 2, 1]
As it's currently in stage 3, it will likely be implemented in browser engines soon, but in the meantime a polyfill is available here or in core-js.
I think my answer is a bit late, but if someone comes across this issue again, the solution may be useful.
I can propose yet another approach with a native function which returns a sorted array.
This code still mutates the original array, but in addition to the native behaviour this implementation returns the sorted array.
// Remember that it is not recommended to extend built-in prototypes
// or, even worse, to override native functions.
// You can create a separate function if you like.
// You can specify any name instead of "sorted" (Python-like).
// Check for existence of the method in the prototype
if (typeof Array.prototype.sorted == "undefined") {
    // If it does not exist, provide your own method
    Array.prototype.sorted = function () {
        Array.prototype.sort.apply(this, arguments);
        return this;
    };
}
This way of solving the problem was ideal in my situation.
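A quick usage sketch (my own, with illustrative values) showing the behaviour described above, including the fact that the original is still sorted in place:
var nums = [3, 1, 2];
var result = nums.sorted(); // -> [1, 2, 3]
console.log(nums);          // -> [1, 2, 3] as well, since this delegates to sort on `this`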
You can also extend the existing Array functionality. This allows chaining different array functions together.
Array.prototype.sorted = function (compareFn) {
    const shallowCopy = this.slice();
    shallowCopy.sort(compareFn);
    return shallowCopy;
};
[1, 2, 3, 4, 5, 6]
    .filter(x => x % 2 == 0)
    .sorted((l, r) => r - l)
    .map(x => x * 2)
// -> [12, 8, 4]
Same in TypeScript:
// extensions.ts
Array.prototype.sorted = function (compareFn?: ((a: any, b: any) => number) | undefined) {
    const shallowCopy = this.slice();
    shallowCopy.sort(compareFn);
    return shallowCopy;
};
declare global {
    interface Array<T> {
        sorted(compareFn?: (a: T, b: T) => number): Array<T>;
    }
}
export {}
// index.ts
import 'extensions.ts';
[1, 2, 3, 4, 5, 6]
    .filter(x => x % 2 == 0)
    .sorted((l, r) => r - l)
    .map(x => x * 2)
// -> [12, 8, 4]

Calculate the mathematical difference of each element between two arrays

Given two arrays of the same length, return an array containing the mathematical difference of each element between the two arrays.
Example:
a = [3, 4, 7]
b = [3, 9, 10 ]
result: c = [(3-3), (9-4), (10-7)] so that c = [0, 5, 3]
let difference = []
function calculateDifferenceArray(data_one, data_two){
    let i = 0
    for (i in data_duplicates) {
        difference.push(data_two[i] - data_one[i])
    }
    console.log(difference)
    return difference
}
calculateDifferenceArray((b, a))
It does work.
I am wondering if there is a more elegant way to achieve the same result.
Use map as follows:
const a = [3, 4, 7]
const b = [3, 9, 10]
const c = b.map((e, i) => e - a[i])
// [0, 5, 3]
for-in isn't a good tool for looping through arrays (more in my answer here).
"More elegant" is subjective, but it can be more concise and, to my eyes, clear if you use map:
function calculateDifferenceArray(data_one, data_two){
    return data_one.map((v1, index) => v1 - data_two[index])
}
calculateDifferenceArray(b, a) // < Note just one set of () here
Live Example:
const a = [3, 4, 7];
const b = [3, 9, 10];
function calculateDifferenceArray(data_one, data_two){
    return data_one.map((v1, index) => v1 - data_two[index]);
}
console.log(calculateDifferenceArray(b, a));
or if you prefer it slightly more verbose for debugging et al.:
function calculateDifferenceArray(data_one, data_two){
    return data_one.map((v1, index) => {
        const v2 = data_two[index]
        return v1 - v2
    })
}
calculateDifferenceArray(b, a)
A couple of notes on the version of this in the question:
It seems to loop over something (data_duplicates?) unrelated to the two arrays passed into the method.
It pushes to an array declared outside the function. That means if you call the function twice, it'll push the second set of values into the array but leave the first set of values there. That declaration and initialization should be inside the function, not outside it.
You had two sets of () in the calculateDifferenceArray call. That meant you only passed one argument to the function, because the inner () wrapped an expression with the comma operator, which takes its second operand as its result (see the short example after these notes).
You had the order of the subtraction operation backward.
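To illustrate that comma-operator point, a small sketch (plain JavaScript, illustrative values):
const a = [3, 4, 7];
const b = [3, 9, 10];
// The comma operator evaluates b, then yields a, so the inner (b, a) is just a:
console.log((b, a)); // [3, 4, 7]
// That is why calculateDifferenceArray((b, a)) receives only one argument.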
You could use the higher-order array method map. It would work something like this:
let a = [2,3,4];
let b = [3,5,7];
let difference = a.map((n,i)=>n-b[i]);
console.log(difference);
You can read more about map here.

benefits of map over for loop in terms of safety [duplicate]

This question already has answers here: Imperative vs Declarative code [closed] (3 answers). Closed 4 years ago.
I have a vague understanding of what it means to "provide immutability" when talking about using map over a for loop, because calling Array.prototype.map on an array does not prevent you from altering the original array. Let's say we have a single-threaded asynchronous program. In these circumstances, is there an "immutability" benefit that map provides over a for loop?
I can imagine the following code:
function A() {
    const arr = [1, 2, 3, 4, 5]
    const anotherArr = arr.map((e, i) => {
        arr[i] = 'changed'
        // here goes some asynchronous code
        return e * 2
    })
    // now arr is ['changed', 'changed', 'changed', 'changed', 'changed']
    // and anotherArr is [2, 4, 6, 8, 10]
}
function B() {
    const arr = [1, 2, 3, 4, 5]
    const anotherArr = []
    for (let i = 0, len = arr.length; i < len; i++) {
        anotherArr[i] = arr[i] * 2
        arr[i] = 'changed'
        // here goes some asynchronous code
    }
    // and again:
    // now arr is ['changed', 'changed', 'changed', 'changed', 'changed']
    // and anotherArr is [2, 4, 6, 8, 10]
}
I guess that in function B, anotherArr is created manually before being populated, so something in the same scope could alter it before the for loop completes. So my question is: "Is there a benefit of using map over a for loop in a single-threaded asynchronous environment in terms of safety?"
Edit
Let me rephrase my question: "Is there some kind of immutability that the Array.prototype.map function provides (because it belongs to the functional paradigm)?"
As I understand it now, the map function in JavaScript is not about immutability; it is about hiding some implementation details (how the resulting array is constructed). So the map function is a declarative way of doing things, while a for loop is an imperative way of doing things (which brings more freedom and more responsibility, I guess).
map, reduce, filter, forEach, etc.
The methods above are, in my mind, nice for FP (functional programming).
Of course, you can use them instead of a normal for loop, but a normal for loop is faster than they are (https://hackernoon.com/javascript-performance-test-for-vs-for-each-vs-map-reduce-filter-find-32c1113f19d7).
So in my view, if you don't care about performance, using map, reduce, etc. is elegant and will give you less, more readable code.

Transducer flatten and uniq

I'm wondering if there is a way, by using a transducer, to flatten a list and filter on unique values?
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
But here we loop over the list twice (+ n times per depth layer). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const flattenDeepUniq = (p, c) => {
    if (isArray(c)) {
        c.forEach(o => p = flattenDeepUniq(p, o));
    } else {
        p = !p.includes(c) ? [...p, c] : p;
    }
    return p;
};
arr.reduce(flattenDeepUniq, []) // -> [1, 2, 3, 4, 5]
We have one loop over the elements (+ n loops for deeper layers), which seems better and more optimized.
Is this even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
//  ^               ^               ^            ^
//  |               |               |            `------ Iteration 4
//  |               |               `-------------------- Iteration 3
//  |               `------------------------------------ Iteration 2
//  `---------------------------------------------------- Iteration 1
into something like this:
xs.reduce((res, x) => res.length >= 5 ? res : isOdd(x * 7 + 3) ? res.concat(x * 7 + 3) : res, [])
//  ^
//  `------------------------------------------------------- Just one iteration
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
    map(multiply(7)),
    map(add(3)),
    filter(isOdd),
    take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
    map(multiply(7)),
    map(add(3)),
    filter(isOdd),
    take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
    (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
    []
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq is poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
    (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
    []
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
    (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
    new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
    (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
    new Set()
))
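A quick check (not part of the original answer) against the sample data from the question:
const arr = [1, 2, [2, 3], [1, [4, 5]]];
flattenUniq(arr); // -> [1, 2, 3, 4, 5]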
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
Uniq is now a transducer in Ramda, so you can use it directly. And as for flatten, you can traverse the tree up front to produce a bunch of flat values:
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const deepIterate = function*(list) {
    for (const it of list) {
        yield* Array.isArray(it) ? deepIterate(it) : [it];
    }
}
R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
This lets you compose additional transducers
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))

How can I replicate Python's dict.items() in Javascript?

In Javascript I have a JSON object from which I want to process just the items:
var json = {
    itema: {stuff: 'stuff'},
    itemb: {stuff: 'stuff'},
    itemc: {stuff: 'stuff'},
    itemd: {stuff: 'stuff'}
}
In Python I could do
print json.items()
[{stuff: 'stuff'},{stuff: 'stuff'},{stuff: 'stuff'},{stuff: 'stuff'}]
Can I do this in js?
You cannot do this the same way as in python without extending Object.prototype, which you don't want to do, because it is the path to misery.
You could easily create a helper function that loops over the object and puts the values into an array, however, like this:
function items(obj) {
    var i, arr = [];
    for (i in obj) {
        arr.push(obj[i]);
    }
    return arr;
}
PS: JSON is a data format; what you have is an object literal.
In python dict.items returns a list of tuples containing both the keys and the values of the dict. Javascript doesn't have tuples, so it would have to be a nested array.
If you'll excuse a little Python code to show the difference:
>>> {1:2, 2:3}.items()
[(1, 2), (2, 3)]
>>> {1:2, 2:3}.values()
[2, 3]
I see the accepted answer returns an array of the object's values, which is the equivalent of the Python function dict.values. What is asked for is dict.items. To do this, just loop and build up a nested array of 2-element arrays.
function items(obj){
    var ret = [];
    for (var v in obj){
        ret.push(Object.freeze([v, obj[v]]));
    }
    return Object.freeze(ret);
}
I put the Object.freeze in to be pedantic and enforce that the returned value shouldn't be altered, to emulate the immutability of python tuples. Obviously it still works if you take it out.
It should be noted that doing this somewhat defeats the purpose of items in that it is used when iterating over the object in a linear rather than associative fashion and it avoids calculating the hash value to look up each element in the associative array. For small objects who cares but for large ones it might slow you down and there might be a more idiomatic way to do what you want in javascript.
Another, newer way to do it is to use Object.entries(), which will do exactly what you want.
Object.entries({1: 1, 2: 2, 3: 3})
    .forEach(function(v){
        console.log(v)
    });
The support is limited to those browser versions mentioned in the documentation.
Thanks to recent updates to JavaScript, we can solve this now:
function items(iterable) {
    return {
        [Symbol.iterator]: function* () {
            for (const key in iterable) {
                yield [key, iterable[key]];
            }
        }
    };
}
for (const [key, val] of items({"a": 3, "b": 4, "c": 5})) {
    console.log(key, val);
}
// a 3
// b 4
// c 5
for (const [key, val] of items(["a", "b", "c"])) {
    console.log(key, val);
}
// 0 a
// 1 b
// 2 c
ubershmekel's answer makes use of lazy evaluation, compared to my answer below which uses eager evaluation. Lazy evaluation has many benefits which make it much more appropriate for performance reasons in some cases, but the transparency of eager evaluation can be a development speed boon that may make it preferable in other cases.
const keys = Object.keys;
const values = object =>
    keys(object).map(key => object[key]);
const items = object =>
    keys(object).map(key => [key, object[key]]);

const obj = {a: 10, b: 20, c: 30};
keys(obj)   // ["a", "b", "c"]
values(obj) // [10, 20, 30]
items(obj)  // [["a", 10], ["b", 20], ["c", 30]]

items(obj).forEach(([k, v]) => console.log(k, v))
// a 10
// b 20
// c 30
Not sure what you want to do, but I guess JSON.stringify will do something like that. See http://www.json.org/js.html
