In JavaScript I have a JSON object from which I want to process just the items:
var json = {
    itema: {stuff: 'stuff'},
    itemb: {stuff: 'stuff'},
    itemc: {stuff: 'stuff'},
    itemd: {stuff: 'stuff'}
}
In Python I could do
print json.items()
[{stuff: 'stuff'},{stuff: 'stuff'},{stuff: 'stuff'},{stuff: 'stuff'}]
Can I do this in JS?
You cannot do this the same way as in Python without extending Object.prototype, which you don't want to do because it is the path to misery.
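To make the misery concrete, here is a quick sketch (the items name here is purely illustrative): once Object.prototype grows an enumerable property, every unguarded for...in loop in the whole program sees it.

```javascript
// Extend Object.prototype with an enumerable property...
Object.prototype.items = function () { return []; };

const data = { a: 1 };
const seen = [];
for (const key in data) {
    seen.push(key); // picks up the inherited "items" as well as "a"
}
console.log(seen); // ["a", "items"]

delete Object.prototype.items; // clean up the damage
```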
You could, however, easily create a helper function that loops over the object and puts each value into an array, like this:
function items(obj) {
    var key, arr = [];
    for (key in obj) {
        if (obj.hasOwnProperty(key)) { // skip inherited properties
            arr.push(obj[key]);
        }
    }
    return arr;
}
PS: JSON is a data format; what you have is an object literal.
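For what it's worth, environments with ES2017 support now ship Object.values, which does exactly what the helper above does without touching any prototypes:

```javascript
// Object.values returns an array of the object's own enumerable values.
const json = {
    itema: { stuff: 'stuff' },
    itemb: { stuff: 'stuff' }
};

console.log(Object.values(json));
// [ { stuff: 'stuff' }, { stuff: 'stuff' } ]
```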
In Python, dict.items returns a list of tuples containing both the keys and the values of the dict. JavaScript doesn't have tuples, so it would have to be a nested array.
If you'll excuse a little Python code to show the difference:
>>> {1:2, 2:3}.items()
[(1, 2), (2, 3)]
>>> {1:2, 2:3}.values()
[2, 3]
I see the accepted answer returns an array of the object's values, which is the equivalent of the Python function dict.values. What is asked for is dict.items. To do this, just loop and build up a nested array of 2-element arrays.
function items(obj) {
    var ret = [];
    for (var key in obj) {
        ret.push(Object.freeze([key, obj[key]]));
    }
    return Object.freeze(ret);
}
I put the Object.freeze in to be pedantic and enforce that the returned value shouldn't be altered, to emulate the immutability of Python tuples. Obviously it still works if you take it out.
It should be noted that doing this somewhat defeats the purpose of items: it is used when iterating over the object in a linear rather than associative fashion, and it avoids computing the hash to look up each element in the associative array. For small objects who cares, but for large ones it might slow you down, and there might be a more idiomatic way to do what you want in JavaScript.
Another newer way to do it is to use Object.entries() which will do exactly what you want.
Object.entries({1: 1, 2: 2, 3: 3})
    .forEach(function (v) {
        console.log(v);
    });
The support is limited to those browser versions mentioned in the documentation.
Thanks to recent updates to JavaScript, we can solve this now:
function items(iterable) {
    return {
        [Symbol.iterator]: function* () {
            for (const key in iterable) {
                yield [key, iterable[key]];
            }
        }
    };
}
for (const [key, val] of items({"a": 3, "b": 4, "c": 5})) {
    console.log(key, val);
}
// a 3
// b 4
// c 5

for (const [key, val] of items(["a", "b", "c"])) {
    console.log(key, val);
}
// 0 a
// 1 b
// 2 c
ubershmekel's answer makes use of lazy evaluation, compared to my answer below which uses eager evaluation. Lazy evaluation has many benefits which make it much more appropriate for performance reasons in some cases, but the transparency of eager evaluation can be a development speed boon that may make it preferable in other cases.
const keys = Object.keys;

const values = object =>
    keys(object).map(key => object[key]);

const items = object =>
    keys(object).map(key => [key, object[key]]);

const obj = {a: 10, b: 20, c: 30};

keys(obj)   // ["a", "b", "c"]
values(obj) // [10, 20, 30]
items(obj)  // [["a", 10], ["b", 20], ["c", 30]]

items(obj).forEach(([k, v]) => console.log(k, v))
// a 10
// b 20
// c 30
Not sure what you want to do, but I guess JSON.stringify will do something like that. See http://www.json.org/js.html
Related
Let's suppose I wanted a sort function that returns a sorted copy of the inputted array. I naively tried this
function sort(arr) {
    return arr.sort();
}
and I tested it with this, which shows that my sort method is mutating the array.
var a = [2,3,7,5,3,7,1,3,4];
sort(a);
alert(a); //alerts "1,2,3,3,3,4,5,7,7"
I also tried this approach
function sort(arr) {
    return Array.prototype.sort(arr);
}
but it doesn't work at all.
Is there a straightforward way around this, preferably a way that doesn't require hand-rolling my own sorting algorithm or copying every element of the array into a new one?
You need to copy the array before you sort it. One way with ES6:
const sorted = [...arr].sort();
The spread syntax in an array literal (copied from MDN):
var arr = [1, 2, 3];
var arr2 = [...arr]; // like arr.slice()
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator
Just copy the array. There are many ways to do that:
function sort(arr) {
    return arr.concat().sort();
}
// Or:
return Array.prototype.slice.call(arr).sort(); // for array-like objects
Try the following
function sortCopy(arr) {
    return arr.slice(0).sort();
}
The slice(0) expression creates a copy of the array starting at element 0.
You can use slice with no arguments to copy an array:
var foo, bar;
foo = [3, 1, 2];
bar = foo.slice().sort();
You can also do this
const d = [20, 30, 10];
const e = Array.from(d);
e.sort();
This way d will not get mutated.
function sorted(arr) {
    const temp = Array.from(arr);
    return temp.sort();
}

// Use it like this
const x = [20, 10, 100];
console.log(sorted(x));
Update - Array.prototype.toSorted() proposal
Array.prototype.toSorted(compareFn) -> Array is a new method proposed for addition to Array.prototype, currently at stage 3 (soon to be available).
This method keeps the target array untouched and instead returns a copy of it with the change applied.
Anyone who wants to do a deep copy (e.g. if your array contains objects) can use:
let arrCopy = JSON.parse(JSON.stringify(arr))
Then you can sort arrCopy without changing arr.
arrCopy.sort((obj1, obj2) => obj1.id - obj2.id) // the comparator must return a number, not a boolean
Please note: this can be slow for very large arrays.
Try this to sort the numbers. This does not mutate the original array.
function sort(arr) {
return arr.slice(0).sort((a,b) => a-b);
}
There's a new TC39 proposal which adds a toSorted method to Array that returns a copy of the array and doesn't modify the original.
For example:
const sequence = [3, 2, 1];
sequence.toSorted(); // => [1, 2, 3]
sequence; // => [3, 2, 1]
As it's currently in stage 3, it will likely be implemented in browser engines soon, but in the meantime a polyfill is available here or in core-js.
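Until then, a minimal polyfill sketch is only a few lines (this is an illustration, not the spec-compliant version shipped by core-js):

```javascript
// Install toSorted only if the engine doesn't already provide it.
if (typeof Array.prototype.toSorted !== "function") {
    Object.defineProperty(Array.prototype, "toSorted", {
        enumerable: false,
        writable: true,
        configurable: true,
        value: function (compareFn) {
            // Copy first, then sort the copy; the original stays untouched.
            return Array.prototype.slice.call(this).sort(compareFn);
        }
    });
}

const sequence = [3, 2, 1];
console.log(sequence.toSorted()); // [1, 2, 3]
console.log(sequence);            // [3, 2, 1]
```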
I think my answer is a bit too late, but if someone comes across this issue again the solution may be useful.
I can propose yet another approach, with a native-style function that returns the sorted array.
This code still mutates the original array, but the method returns the sorted array so that it can be used inside expressions.
// Remember that it is not recommended to extend built-in prototypes
// or, even worse, override native functions.
// You can create a separate function if you like.
// You can specify any name instead of "sorted" (Python-like).

// Check for existence of the method on the prototype
if (typeof Array.prototype.sorted == "undefined") {
    // If it does not exist, provide your own method
    Array.prototype.sorted = function () {
        Array.prototype.sort.apply(this, arguments);
        return this;
    };
}
This way of solving the problem was ideal in my situation.
You can also extend the existing Array functionality. This allows chaining different array functions together.
Array.prototype.sorted = function (compareFn) {
    const shallowCopy = this.slice();
    shallowCopy.sort(compareFn);
    return shallowCopy;
}
[1, 2, 3, 4, 5, 6]
    .filter(x => x % 2 == 0)
    .sorted((l, r) => r - l)
    .map(x => x * 2)
// -> [12, 8, 4]
Same in typescript:
// extensions.ts
Array.prototype.sorted = function (compareFn?: ((a: any, b: any) => number) | undefined) {
    const shallowCopy = this.slice();
    shallowCopy.sort(compareFn);
    return shallowCopy;
}

declare global {
    interface Array<T> {
        sorted(compareFn?: (a: T, b: T) => number): Array<T>;
    }
}

export {}

// index.ts
import 'extensions.ts';

[1, 2, 3, 4, 5, 6]
    .filter(x => x % 2 == 0)
    .sorted((l, r) => r - l)
    .map(x => x * 2)
// -> [12, 8, 4]
I'm wondering if there is a way, using a transducer, to flatten a list and filter on unique values?
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
But here we loop twice over the list (+ n by depth layer). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const flattenDeepUniq = (p, c) => {
    if (isArray(c)) {
        c.forEach(o => p = flattenDeepUniq(p, o));
    } else {
        p = !p.includes(c) ? [...p, c] : p;
    }
    return p;
};

arr.reduce(flattenDeepUniq, []); // -> [1, 2, 3, 4, 5]
We have one loop over the elements (plus n loops for deeper layers), which seems better and more optimized.
Is it even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
//  ^               ^               ^            ^
//   \               \               \            `----- Iteration 4
//    \               \               `----------------- Iteration 3
//     \               `-------------------------------- Iteration 2
//      `----------------------------------------------- Iteration 1
into something like this:
xs.reduce((r, x) => r.length >= 5 ? r : isOdd(x * 7 + 3) ? r.concat(x * 7 + 3) : r, [])
// ^
// `------------------------------------------------------- Just one iteration
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
    map(multiply(7)),
    map(add(3)),
    filter(isOdd),
    take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
    map(multiply(7)),
    map(add(3)),
    filter(isOdd),
    take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
    (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
    []
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
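As an aside, that fast-lookup container is exactly how a standalone uniq is usually written in modern JavaScript; a minimal sketch for flat arrays:

```javascript
// Deduplicate a flat array with a Set: membership checks are (amortized)
// constant time, unlike Array#includes, which scans linearly.
const uniq = xs => [...new Set(xs)];

console.log(uniq([1, 2, 2, 3, 1])); // [1, 2, 3]
```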
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
    (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
    []
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
    (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
    new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
    (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
    new Set()
))
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
uniq is now a transducer in Ramda, so you can use it directly. And as for flatten, you can traverse the tree up front to produce a bunch of flat values:
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const deepIterate = function* (list) {
    for (const it of list) {
        yield* Array.isArray(it) ? deepIterate(it) : [it];
    }
}
R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
This lets you compose additional transducers
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))
In JavaScript, how can I repeat an array which contains multiple elements, in a concise manner?
In Ruby, you could do
irb(main):001:0> ["a", "b", "c"] * 3
=> ["a", "b", "c", "a", "b", "c", "a", "b", "c"]
I looked up the lodash library and didn't find anything that was directly applicable. "Feature request: repeat arrays" is a feature request for adding it to lodash, and the best workaround given there is
const arrayToRepeat = [1, 2, 3];
const numberOfRepeats = 3;
const repeatedArray = _.flatten(_.times(numberOfRepeats, _.constant(arrayToRepeat)));
The questions Most efficient way to create a zero filled JavaScript array? and Create an array with same element repeated multiple times focus on repeating just a single element multiple times, whereas I want to repeat an array which has multiple elements.
Using reasonably well-maintained libraries is acceptable.
No need for any library, you can use Array.from to create an array of arrays you want repeated, and then flatten using [].concat and spread:
const makeRepeated = (arr, repeats) =>
[].concat(...Array.from({ length: repeats }, () => arr));
console.log(makeRepeated([1, 2, 3], 2));
On newer browsers, you can use Array.prototype.flat instead of [].concat(...):
const makeRepeated = (arr, repeats) =>
Array.from({ length: repeats }, () => arr).flat();
console.log(makeRepeated([1, 2, 3], 2));
You can use the Array constructor along with its fill method to fill it a number of times with the array you want to repeat, then concat them (the subarrays) into a single array:
const repeatedArray = [].concat(...Array(num).fill(arr));
Note: On older browsers (pre-ES6), you can use Function#apply to mimic the spread syntax above (concat will be called with each of the subarrays passed to it as an argument):
var repeatedArray = [].concat.apply([], Array(num).fill(arr));
Example:
const arrayToRepeat = [1, 2, 3];
const numberOfRepeats = 3;
const repeatedArray = [].concat(...Array(numberOfRepeats).fill(arrayToRepeat));
console.log(repeatedArray);
const repeat = (a, n) => Array(n).fill(a).flat(1)
console.log( repeat([1, 2], 3) )
Recursive alternative:
const repeat = (a, n) => n ? a.concat(repeat(a, --n)) : [];
console.log( repeat([1, 2], 3) )
My first idea would be creating a function like this
let repeat = (array, numberOfTimes) =>
    Array(numberOfTimes).fill(array).reduce((a, b) => [...a, ...b], []);

console.log(repeat(["a", "b", "c"], 3));
using the fill method and reduce
Ideally, instead of using reduce you could use flat, but at the time of writing there was no browser support.
Try
Array(3).fill(["a", "b", "c"]).flat()
console.log( Array(3).fill(["a", "b", "c"]).flat() );
Unfortunately, it is not possible natively in JS (operator overloading is not possible either, so we cannot use something like Array.prototype.__mul__), but we can create an Array of the proper target length, fill it with placeholders, then re-map the values:
const seqFill = (filler, multiplier) =>
    Array(filler.length * multiplier)
        .fill(1)
        .map((_, i) => filler[i % filler.length]);

console.log(seqFill([1, 2, 3], 3));
console.log(seqFill(['a', 'b', 'c', 'd'], 5));
Or, another way, by hooking into the Array prototype, you could use the syntax Array#seqFill(multiplier); this is probably the closest you can get to the Ruby syntax (Ruby can do basically everything with operator overloading, but JS can't):
Object.defineProperty(Array.prototype, 'seqFill', {
    enumerable: false,
    value: function (multiplier) {
        return Array(this.length * multiplier).fill(1).map((_, i) => this[i % this.length]);
    }
});
console.log([1,2,3].seqFill(3));
Apart from the obvious [].concat + Array.from({length: 3}, …)/fill() solution, using generators will lead to elegant code:
function* concat(iterable) {
    for (const x of iterable)
        for (const y of x)
            yield y;
}

function* repeat(n, x) {
    while (n-- > 0)
        yield x;
}
const repeatedArray = Array.from(concat(repeat(3, [1, 2, 3])));
You can also shorten it to
function* concatRepeat(n, x) {
    while (n-- > 0)
        yield* x;
}
const repeatedArray = Array.from(concatRepeat(3, [1, 2, 3]));
Though the other methods work simply, so do these.
Note that Array.prototype.fill() and Array.from() used in the previous methods will not work in IE (see the MDN docs for reference).
Method 1: Loop and push (Array.prototype.push) the source array into a temporary array.
function mutateArray(arr, n) {
    var temp = [];
    while (n--) Array.prototype.push.apply(temp, arr);
    return temp;
}
var a = [1,2,3,4,5];
console.log(mutateArray(a,3));
Method 2: Join the array elements into a string, repeat it with String.repeat(), and split the result back into an array.
Note: The repeat method is not supported yet in IE and Android webviews.
function mutateArray(arr, n) {
    var arr = (arr.join("$") + "$").repeat(n).split("$");
    arr.pop(); // to remove the last empty element
    return arr;
}
var a = [1,2,3,4,5];
console.log(mutateArray(a,3));
I'm trying to understand recursion. I have a somewhat decent understanding of how it works intuitively, but the aggregation of the returned data is the bit I struggle with.
For instance, in javascript to flatten an array I came up with the following code:
var _flatten = function (arr) {
    if (!(arr instanceof Array)) return arr;
    var g = [];
    function flatten(arr) {
        for (var i = 0; i < arr.length; i++) {
            if (arr[i] instanceof Array) {
                flatten(arr[i]);
            } else {
                g.push(arr[i]);
            }
        }
    }
    flatten(arr);
    return g;
}
Turning something like this
var list = [1,2,3,4,5,6,[1,2,3,4,5,[1,2,3],[1,2,3,4]]];
into this: [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 1, 2, 3, 1, 2, 3, 4]
Which is fine and all, but the global variable g seems like some kind of cheap hack. I don't know how to think about the result returned when getting to the top of the stack and the return of the function propagating back down the stack. How would you implement this function, and how can I get a better grasp on this?
Thanks!
Instead of a global variable (to make it more properly recursive), you can send g in as an argument to the flatten function, and pass the modified g back out with a return statement.
var _flatten = function (arr) {
    if (!(arr instanceof Array)) return arr;
    function flatten(arr, g) {
        for (var i = 0; i < arr.length; i++) {
            if (arr[i] instanceof Array) {
                flatten(arr[i], g);
            } else {
                g.push(arr[i]);
            }
        }
        return g;
    }
    return flatten(arr, []);
}
There are many ways to write an array-flattening procedure, but I understand your question is about understanding recursion in general.
The g isn't global in any sense of the word, but it is a symptom of the implementation choices. Mutation isn't necessarily bad so long as you keep it localized to your function: that g is never leaked outside of the function where someone could potentially observe the side effects.
Personally tho, I think it's better to break your problem into small generic procedures that make it much easier to describe your code.
You'll note that we don't have to setup temporary variables like g or handle incrementing array iterators like i – we don't even have to look at the .length property of the array. Not having to think about these things make it really nice to write our program in a declarative way.
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs => xs.map(f).reduce((x,y) => x.concat(y), [])
// id :: a -> a
const id = x => x
// flatten :: [[a]] -> [a]
const flatten = concatMap (id)
// isArray :: a -> Bool
const isArray = Array.isArray
// deepFlatten :: [[a]] -> [a]
const deepFlatten = concatMap (x => isArray(x) ? deepFlatten(x) : x)
// your sample data
let data = [0, [1, [2, [3, [4, 5], 6]]], [7, [8]], 9]
console.log(deepFlatten(data))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
console.log(flatten(data))
// [ 0, 1, [ 2, [ 3, [ 4, 5 ], 6 ] ], 7, [ 8 ], 9 ]
First you'll see I made two different flattening procedures. flatten to flatten one level of nesting, and deepFlatten to flatten an arbitrarily deep array.
You'll also see I use Array.prototype.map and Array.prototype.reduce since these are provided by ECMAScript but that doesn't mean you're only limited to using procedures that you have. You can make your own procedures to fill the gaps. Here we made concatMap which is a useful generic provided by other languages such as Haskell.
Utilizing these generics, you can see that deepFlatten is an insanely simple procedure.
// deepFlatten :: [[a]] -> [a]
const deepFlatten = concatMap (x => isArray(x) ? deepFlatten(x) : x)
It's made up of a single expression including a lambda made up of a single if branch (by use of the ternary operator ?:)
Maybe it's a lot to take in, but hopefully it demonstrates that "writing a recursive procedure" isn't always about complicated setup of state variables and complex logic to control the recursion. In this case, it's a simple
if (condition) recurse else don't
If you have any questions, let me know. I'm happy to help you in any way I can.
In fact, recursive coding is very simple, and every aspect of it should be contained in the function body. Any information that needs to be passed along should be sent through arguments to the next recursion. Use of global state is very ugly and should be avoided. Accordingly, I would do the in-place array flattening job as follows:
var list = [1,2,3,4,5,6,[1,2,3,4,5,[1,2,3],[1,2,[9,8,7],3,4]]];
function flatArray(arr) {
    for (var i = 0, len = arr.length; i < len; i++)
        Array.isArray(arr[i]) && (arr.splice(i, 0, ...flatArray(arr.splice(i, 1)[0])), len = arr.length);
    return arr;
}
console.log(flatArray(list));
I wonder if this question has a functional programming approach. I have an object literal and some keys:
var obj = {'a':20, 'b':44, 'c':70};
var keys = ['a','c'];
And I want to obtain:
{'a':20, 'c':70}
But without for loop. I tried:
_.object(keys, _.map(keys, function(key){ return obj[key]; }))
It gives the result, but it seems quite complex...
Since you use underscore.js, try the _.pick() method, which was implemented specifically for that:
var obj = {
    'a': 20,
    'b': 44,
    'c': 70
};
var keys = ['a', 'c'];

console.log( _.pick(obj, keys) );
// Object {a: 20, c: 70}
You can do it with .reduce():
var extracted = keys.reduce(function (o, k) {
    o[k] = obj[k];
    return o;
}, {});
The .reduce() method (known as "inject" or "fold" in some other languages) iterates through the values of the array. Each one is passed to the callback along with the accumulator, which starts out as the initial value given as the second argument. The callback does whatever it needs to do with each array entry and returns the accumulator to be passed on to the next iteration.
The pattern above is pretty typical: start with an empty object and add to it with each function call.
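In environments with ES2019 support, the same build-an-object pattern can be written with Object.fromEntries, the inverse of Object.entries (a sketch, assuming every key in keys actually exists on obj):

```javascript
const obj = { 'a': 20, 'b': 44, 'c': 70 };
const keys = ['a', 'c'];

// Map each key to a [key, value] pair, then zip the pairs back into an object.
const extracted = Object.fromEntries(keys.map(k => [k, obj[k]]));
console.log(extracted); // { a: 20, c: 70 }
```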