I have two APIs to work with and they can't be changed. One of them returns a type like this:
{
type: 25
}
and to other API I should send type like this:
{
type: 'Computers'
}
where 25 == 'Computers'. What I want to have is a map of numeric indices to string values, like this:
{
'1': 'Food',
'2': 'Something',
....
'25': 'Computers'
....
}
I am not sure why, but it doesn't feel right to have such a map from numeric keys to string values, but maybe it is completely fine? I tried to Google the answer, but couldn't find anything specific. In one place it says that it is fine; in another, some people say it's better not to have numeric values as object keys. So, who is right and why? Could somebody help me with this question?
Thanks :)
There's nothing wrong with it, but I can understand how it might look a little hinky. One alternative is to have an array of objects each with their own id that you can then filter/find on:
const arr = [ { id: 1, label: 'Food' }, { id: 2, label: 'Something' }, { id: 25, label: 'Computers' } ];
const id = 25;
function getLabel(arr, id) {
return arr.find(obj => obj.id === id).label;
}
console.log(getLabel(arr, id));
You can use the Map object for this if using a regular object feels "weird".
const map = new Map()
map.set(25, 'Computers');
map.set(1, 'Food');
// then later
const computers = map.get(25);
// or loop over the map with
// note: Map's forEach passes (value, key) to the callback
map.forEach((category, id) => {
console.log(id, category);
});
Quick Update:
As mentioned by others, using objects with key=value pairs is OK.
In the end, almost everything in JavaScript is an object (including arrays).
Using key-value pairs or Map has one big advantage (in some cases it makes a huge difference): you get an "indexed" data structure. You don't have to search the entire array to find what you are looking for.
const a = data[id];
is nearly instant, whereas if you search for an id in an array of objects...it all depends on your search algorithm and the size of the array.
Using an "indexed" object over an array gives much better performance if dealing with large arrays that are constantly being updated/searched by some render-loop function.
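For illustration, here's a minimal sketch of that idea (the categories array and its contents are made up for this example): scan the array once to build the index, and after that each lookup is a single property access, like the data[id] above:
const categories = [
  { id: 1, label: 'Food' },
  { id: 2, label: 'Something' },
  { id: 25, label: 'Computers' }
];

// build the "indexed" object once
const data = {};
for (const cat of categories) {
  data[cat.id] = cat.label;
}

console.log(data[25]); // "Computers", no array scan needed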
Map has the advantage of maintaining the insertion order of key-value pairs, and it also only iterates over the entries that you have set. When looping over object properties, you have to check that the property belongs to that object and is not "inherited" through the prototype chain (hasOwnProperty).
const m = new Map();
m.set(5, 'five');
m.set(1, 'one');
m.set(2, 'two');
// some other function altered the prototype of the same object
m.__proto__.test = "test";
// Map's forEach passes (value, key) to the callback
m.forEach((value, key) => {
console.log(value, key);
});
/*
outputs:
five 5
one 1
two 2
*/
const o = {};
o[5] = 'five';
o[1] = 'one';
o[2] = 'two';
// something else in the code used the same object's prototype and added
// a new property which you are not aware of.
o.__proto__.someUnexpectedFunction = () => {};
for (const key in o) {
console.log(key, o[key]);
}
/*
Output:
1 one
2 two
5 five
someUnexpectedFunction () => {}
*/
Maps and objects also have one very important advantage (sometimes a disadvantage, depending on your needs): Maps/objects/Sets guarantee that your keys are unique. This will automatically remove any duplicates from your result set.
With arrays you would need to check every time if an element is already in the array or not.
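A tiny sketch of that difference (values made up):
// setting the same key twice just overwrites the entry,
// so duplicates never accumulate
const seen = new Map();
seen.set(25, 'Computers');
seen.set(25, 'Computers'); // overwrites, still one entry
console.log(seen.size); // 1

// with an array you have to guard manually
const list = [];
if (!list.includes('Computers')) {
  list.push('Computers');
}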
I'm currently learning about the reduce method in JS, and while I have a basic understanding of it, more complex code completely throws me off. I can't seem to wrap my head around how the code is doing what it's doing. Mind you, it's not that the code is wrong, it's that I can't understand it. Here's an example:
const people = [
{ name: "Alice", age: 21 },
{ name: "Max", age: 20 },
{ name: "Jane", age: 20 },
];
function groupBy(objectArray, property) {
return objectArray.reduce((acc, obj) => {
const key = obj[property];
const curGroup = acc[key] ?? [];
return { ...acc, [key]: [...curGroup, obj] };
}, {});
}
const groupedPeople = groupBy(people, "age");
console.log(groupedPeople);
// {
// 20: [
// { name: 'Max', age: 20 },
// { name: 'Jane', age: 20 }
// ],
// 21: [{ name: 'Alice', age: 21 }]
// }
Now the reduce method as I understand it, takes an array, runs some provided function on all the elements of the array in a sequential manner, and adds the result of every iteration to the accumulator. Easy enough. But the code above seems to do something to the accumulator as well and I can't seem to understand it. What does
acc[key] ?? []
do?
Code like this makes it seem like a breeze:
const array1 = [1, 2, 3, 4];
// 0 + 1 + 2 + 3 + 4
const initialValue = 0;
const sumWithInitial = array1.reduce(
(accumulator, currentValue) => accumulator + currentValue,
initialValue
);
console.log(sumWithInitial);
// Expected output: 10
But then I see code like in the first block, I'm completely thrown off. Am I just too dumb or is there something I'm missing???
Can someone please take me through each iteration of the code above while explaining how it
does what it does on each turn? Thanks a lot in advance.
You are touching on a big problem with reduce. While it is such a nice function, it often favors code that is hard to read, which is why I often end up using other constructs.
Your function groups a number of objects by a property:
const data = [
{category: 'catA', id: 1},
{category: 'catA', id: 2},
{category: 'catB', id: 3}
]
console.log(groupBy(data, 'category'))
will give you
{
catA: [{category: 'catA', id: 1}, {category: 'catA', id: 2}],
catB: [{category: 'catB', id: 3}]
}
It does that by taking apart the acc object and rebuilding it with the new data in every step:
objectArray.reduce((acc, obj) => {
const key = obj[property]; // get the data value (i.e. 'catA')
const curGroup = acc[key] ?? []; // get collector from acc or new array
// rebuild acc by copying all values, but replace the property stored
// in key with an updated array
return { ...acc, [key]: [...curGroup, obj] };
}, {});
You might want to look at the spread syntax (...) and the nullish coalescing operator (??).
Here is a more readable version:
objectArray.reduce((groups, entry) => {
const groupId = entry[property];
if(!groups[groupId]){
groups[groupId] = [];
}
groups[groupId].push(entry);
return groups;
}, {});
This is a good example where I would favor a good old for:
function groupBy(data, keyProperty){
const groups = {}
for(const entry of data){
const groupId = entry[keyProperty];
if(!groups[groupId]){
groups[groupId] = [];
}
groups[groupId].push(entry);
}
return groups;
}
Pretty much the same number of lines, same level of indentation, easier to read, and even slightly faster (or a whole lot faster for larger data, since spreading the accumulator on every iteration costs O(n), whereas push does not).
That code is building an object in the accumulator, starting with {} (an empty object). Every property in the object will be a group of elements from the array: The property name is the key of the group, and the property value is an array of the elements in the group.
The code const curGroup = acc[key] ?? []; gets the current array for the group acc[key] or, if there isn't one, gets a new blank array. ?? is the "nullish coalescing operator." It evaluates to its first operand if that value isn't null or undefined, or its second operand if the first was null or undefined.
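A few quick examples of how ?? behaves (note that, unlike ||, it only falls back for null and undefined, not for other falsy values like 0):
console.log(undefined ?? []); // []
console.log(null ?? []);      // []
console.log(0 ?? []);         // 0   (0 is not null/undefined)
console.log([1, 2] ?? []);    // [1, 2]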
So far, we know that obj[property] determines the key for the object being visited, and curGroup is the current array of values for that key (created as necessary).
Then return { ...acc, [key]: [...curGroup, obj] }; uses spread notation to create a new accumulator object that has all of the properties of the current acc (...acc), and then adds or replaces the property with the name in key with a new array containing any previous values the accumulator had for that key (curGroup) plus the object being visited (obj); that object belongs in the group because key came from obj[property].
Here's that again, related to the code via comments. I've split out the part creating a new array [...curGroup, obj] from the part creating a new accumulator object for clarity:
function groupBy(objectArray, property) {
return objectArray.reduce(
(acc, obj) => {
// Get the value for the grouping property from this object
const key = obj[property];
// Get the known values array for that group, if any, or
// a blank array if there's no property with the name in
// `key`.
const curGroup = acc[key] ?? [];
// Create a new array of known values, adding this object
const newGroup = [...curGroup, obj];
// Create and return a new object with the new array, either
// adding a new group for `key` or replacing the one that
// already exists
return { ...acc, [key]: newGroup };
},
/* The starting point, a blank object: */ {}
);
}
It's worth noting that this code is very much written with functional programming in mind. It uses reduce instead of a loop (when not using reduce, FP usually uses recursion rather than loops) and creates new objects and arrays rather than modifying existing ones.
Outside of functional programming, that code would probably be written very differently, but reduce is designed for functional programming, and this is an example of that.
Just FWIW, here's a version not using FP or immutability (more on immutability below):
function groupBy(objectArray, property) {
// Create the object we'll return
const result = {};
// Loop through the objects in the array
for (const obj of objectArray) {
// Get the value for `property` from `obj` as our group key
const key = obj[property];
// Get our existing group array, if we have one
let group = result[key];
if (group) {
// We had one, add this object to it
group.push(obj);
} else {
// We didn't have one, create an array with this object
// in it and store it on our result object
result[key] = [obj];
}
}
return result;
}
In a comment you said:
I understand the spread operator but it's use in this manner with the acc and the [key] is something I'm still confused about.
Yeah, there are a lot of things packed into return { ...acc, [key]: [...curGroup, obj] };. :-) It has both kinds of spread syntax (... isn't an operator, though it's not particularly important) plus computed property name notation ([key]: ____). Let's separate it into two statements to make it easier to discuss:
const updatedGroup = [...curGroup, obj];
return { ...acc, [key]: updatedGroup };
TL;DR - It creates and returns a new accumulator object with the contents of the previous accumulator object plus a new or updated property for the current/updated group.
Here's how that breaks down:
[...curGroup, obj] uses iterable spread. Iterable spread spreads out the contents of an iterable (such as an array) into an array literal or a function call's argument list. In this case, it's spread into an array literal: [...curGroup, obj] says "create a new array ([]) spreading out the contents of the curGroup iterable at the beginning of it (...curGroup) and adding a new element at the end (, obj)."
{ ...acc, ____ } uses object property spread. Object property spread spreads out the properties of an object into a new object literal. The expression { ...acc, _____ } says "create a new object ({}) spreading out the properties of acc into it (...acc) and adding or updating a property afterward (the part I've left as just _____ for now)."
[key]: updatedGroup (in the object literal) uses computed property name syntax to use the value of a variable as the property name in an object literal's property list. So instead of { example: value }, which creates a property with the actual name example, computed property name syntax puts [] around a variable or other expression and uses the result as the property name. For instance, const obj1 = { example: value }; and const key = "example"; const obj2 = { [key]: value }; both create an object with a property called example with the value from value. The reduce code is using [key]: updatedGroup to add or update a property in the new accumulator whose name comes from key and whose value is the new group array.
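Putting those three pieces together with values like the ones from the question (chosen just for illustration), one step looks like this:
const curGroup = [{ name: "Max", age: 20 }];
const obj = { name: "Jane", age: 20 };
const key = 20;
const acc = { 21: [{ name: "Alice", age: 21 }] };

const newGroup = [...curGroup, obj];
// [{ name: "Max", age: 20 }, { name: "Jane", age: 20 }]

const newAcc = { ...acc, [key]: newGroup };
// { 20: [Max, Jane], 21: [Alice] }, and acc itself is untouched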
Why create a new accumulator object (and new group arrays) rather than just updating the one that the code started with? Because the code is written such that it avoids modifying any object (array or accumulator) after creating it. Instead of modifying one, it always creates a new one. Why? It's "immutable programming," writing code that only ever creates new things rather than modifying existing things.
There are good reasons for immutable programming in some contexts. It reduces the chance that a change in code in one place has unexpected ramifications elsewhere in the codebase. Sometimes it's necessary, because the original object is immutable (such as one from Mongoose) or must be treated as though it were immutable (such as state objects in React or Vue).
In this particular code it's pointless, it's just style. None of these objects is shared anywhere until the process is complete and none of them is actually immutable. The code could just as easily use push to add objects to the group arrays and use acc[key] = updatedGroup; to add/update groups on the accumulator object. But again, while it's pointless in this code, there are good uses for immutable programming. Functional programming usually adheres to immutability (as I understand it; I haven't studied FP deeply).
How do I search an array for any instances of multiple specified string values?
const arrayOfObjects = [{
name: box1,
storage: ['car', 'goat', 'tea']
},
{
name: box2,
storage: ['camel', 'fox', 'tea']
}
];
arrayOfSearchItems = ['goat', 'car', 'oranges'];
If any one or all of the arrayOfSearchItems is present in one of the objects in my array, I want it to either return false or some other way that I can use to excluded that object that is in my arrayOfObjects from a new, filtered arrayOfObjects without any objects that contained the arrayOfSearchItems string values. In this case I would want an array of objects without box1.
Here is what I have tried to do, based on other suggestions. I spent a long time on this. The problem with this function is that it only works on the first arrayOfSearchItems strings, to exclude that object. It will ignore the second or third strings, and not exclude the object, even if it contains those strings. For example, it will exclude an object with 'goat'. Once that happens though, it will no longer exclude based on 'car'. I have tried to adapt my longer code for the purposes of this question, I may have some typos.
const excludeItems = (arrayOfSearchItems, arrayOfObjects) => {
let incrementArray = [];
let userEffects = arrayOfSearchItems;
let objects = arrayOfObjects;
for (i = 0; i < userEffects.length; i++) {
for (x = 0; x < objects.length; x++) {
if (objects[x].storage.indexOf(userEffects) <= -1) {
incrementArray.push(objects[x]);
}
}
}
return(incrementArray);
}
let filteredArray = excludeItems(arrayOfSearchItems, arrayOfObjects);
console.log(filteredArray);
Thanks for providing some example code. That helps.
Let's start with your function, which has a good signature:
const excludeItems = (arrayOfSearchItems, arrayOfObjects) => { ... }
If we describe what this function should do, we would say "it returns a new array of objects which do not contain any of the search items." This gives us a clue about how we should write our code.
Since we will be returning a filtered array of objects, we can start by using the filter method:
return arrayOfObjects.filter(obj => ...)
For each object, we want to make sure that its storage does not contain any of the search items. Another way to word this is "every item in the storage array does NOT appear in the list of search items". Now let's write that code using the every method:
.filter(obj => {
// ensure "every" storage item matches a condition
return obj.storage.every(storageItem => {
// the "condition" is that it is NOT in the array search items
return arrayOfSearchItems.includes(storageItem) === false);
});
});
Putting it all together:
const excludeItems = (arrayOfSearchItems, arrayOfObjects) => {
return arrayOfObjects.filter(obj => {
return obj.storage.every(storageItem => {
return arrayOfSearchItems.includes(storageItem) === false;
});
});
}
Here's a fiddle: https://jsfiddle.net/3p95xzwe/
You can achieve your goal by using some of the built-in Array prototype functions, like filter, some and includes.
const excludeItems = (search, objs) =>
objs.filter(({storage:o}) => !search.some(s => o.includes(s)));
In other words: filter my array objs on the property storage, keeping only those whose storage doesn't include any of the strings in search.
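For example, with the data from the question (the box names are quoted here so the snippet runs on its own):
const arrayOfObjects = [
  { name: 'box1', storage: ['car', 'goat', 'tea'] },
  { name: 'box2', storage: ['camel', 'fox', 'tea'] }
];
const arrayOfSearchItems = ['goat', 'car', 'oranges'];

const excludeItems = (search, objs) =>
  objs.filter(({ storage: o }) => !search.some(s => o.includes(s)));

console.log(excludeItems(arrayOfSearchItems, arrayOfObjects));
// [{ name: 'box2', storage: ['camel', 'fox', 'tea'] }]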
I have a question. I was thinking about it for a long time, but sadly I can't find an answer.
I know the every method.
My question is about this code section:
var tr = order.every((i) => stock[i[0]] >= i[1]);
My questions are:
stock is an object. Why must I write it as if it were an array?
Why is it i[0] in stock and then i[1]?
Why does this code check the nested arrays in const order?
const order = [
["shirt", 5],
["shoes", 2]
];
const stock = {
shirt: 50,
height: 172,
mass: 120,
shoes: 6
};
var tr = order.every((i) => stock[i[0]] >= i[1]); /// return true
console.log(`tr:`,tr)
So, square brackets can be used to access an element inside an array by passing its index, e.g.:
const arr = ["first", "second"];
const secondElement = arr[1] // index 1 means second element
and square brackets can also be used to access a value inside an object by passing its key, e.g.:
const obj = { first: 1, second: 2 };
const secondElement = obj.second // normal way to access a value in an object
const secondElementWithAnotherSyntax = obj['second'] // another syntax, same thing
The cool thing about the other syntax shown is that you can pass a variable to it, e.g.:
const objKey = 'second'
const secondElement = obj[objKey]
Now let's look at your example: i is one element of the array order, which itself contains arrays, so i is one of the two small arrays, and i[0] is the string at the beginning of each small array, so:
i[0] // is either 'shirt' or 'shoes'
and since stock is an object that has those keys, you can access, for example, the value 50 by saying stock['shirt'] or, as in your case, stock[i[0]] ;)
Now your second question: why should it be >= i[1]?
Because the order's second item, aka i[1], is the number of items required/ordered, so this should always be at most what you have in stock; you can't buy 5 shirts from a place that has only 3 in stock :)
1. stock is an object. Why must I write it as if it were an array?
You can access properties of objects using brackets [].
Why do we need this?
To be able to access properties of objects dynamically, e.g. when you are looping through keys and want to get the values:
const data = { shirt: 50, shoes: 6 }; // an example object for illustration
Object.keys(data).forEach(function(key) {
console.log('Key : ' + key + ', Value : ' + data[key])
})
Sometimes there is no other way to access the value:
const json = {
"id":"1",
"some key with spaces": "48593"
};
// console.log(json.some key with spaces); // syntax error: dot notation can't handle spaces
console.log(json['some key with spaces']); // prints "48593"
2. Why is it i[0] in stock and then i[1]?
3. Why does this code check the nested arrays in const order?
The code goes through the orders; each order is an array, so i[0] is the type of the order and i[1] is the quantity. The code checks if there are enough items in stock. To check if there are enough shirts you would do:
console.log(stock["shirt"] >= 5);
That's what the code in your example does; it just passes the key ("shirt") and quantity (5) dynamically.
May I suggest trying to use more expressive names for the variables?
An object property can be accessed through bracket notation, as in stock[orderedProductName] when using a variable - Property accessors
A concise but imho more readable version can be written using destructuring assignment
const order = [
["shirt", 5],
["shoes", 2]
];
const stock = {
shirt: 50,
height: 172,
mass: 120,
shoes: 6,
};
// original version
let inStock = order.every((i) => stock[i[0]] >= i[1]); /// return true
// more verbose version
// check if every item in array order satisfies the condition
// let's cycle over the array calling the element we're working on
// orderItem
inStock = order.every( orderItem => {
const orderedProductName = orderItem[0];
const orderedProductQuantity = orderItem[1];
// to access an object property we can use bracket notation
const stockProductQuantity = stock[orderedProductName];
// the condition to check: do we have enough products in stock ?
return stockProductQuantity >= orderedProductQuantity;
});
// a concise variation could make use of destructuring assignment.
// Here, when we take the order item array, we immediately assign
// each of its elements to the appropriate variable
//
// orderItem[0] or first array element -> productName
// orderItem[1] or second array element -> orderedQuantity
inStock = order.every(([productName, orderedQuantity]) =>
stock[productName] >= orderedQuantity
);
if(inStock) {
console.log('pack and ship');
}
else {
console.log('need to restock');
}
The every() method tests whether all elements in the array pass the test implemented by the provided function. It returns a Boolean value. If you want to read more Array.prototype.every()
In your code snippet you are checking that every item in the order array has a quantity less than or equal to the quantity available in stock.
To access the properties of an object you can use bracket notation, just like with arrays. To read more: Bracket Notation
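As a quick worked example with the data from the question:
const order = [["shirt", 5], ["shoes", 2]];
const stock = { shirt: 50, height: 172, mass: 120, shoes: 6 };

// i[0] is the key into stock ("shirt"/"shoes"), i[1] is the ordered quantity
const tr = order.every((i) => stock[i[0]] >= i[1]);
console.log(tr); // true: 50 >= 5 and 6 >= 2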
If you assigned more meaningful variables to the code you'd probably understand how this works better.
In one order (an array) we have two nested arrays. The first describes shirt/value, the other shoes/value. every is going to see if there is enough stock for both shirt and shoes by checking that the stockValue >= the items in the order.
When you every over the order array the callback for each iteration is one orderItem (['shirt', 5] first, then ['shoes', 2] for the second). We can assign the first element of each array to a variable called itemType, and the second to a variable called itemQty.
So when you see stock[i[0]] we can translate that in the new code as stock[itemType], which uses bracket notation to locate the property value associated with that key in the stock object. We then check to see if that value >= the itemQty.
const order=[["shirt",5],["shoes",2]],stock={shirt:50,height:172,mass:120,shoes:6};
const result = order.every(orderItem => {
const itemType = orderItem[0];
const itemQty = orderItem[1];
return stock[itemType] >= itemQty;
});
console.log(result);
I need to process files that are structured like
Title
/foo/bar/foo/bar 1
/foo/bar/bar 2
/bar/foo 3
/bar/foo/bar 4
It's easy enough to parse this into an array of arrays, by splitting at every / and \n. However, once I get an array of arrays, I can't figure out a good way to turn that into nested objects. Desired format:
{
Title,
{
foo: {
bar: {
foo: {
bar: 1
},
bar: 2
}
},
bar: {
foo: {
3,
bar: 4
}
}
}
}
This seems like a super common thing to do, so I'm totally stumped as to why I can't find any pre-existing solutions. I sort of expected javascript to even have a native function for this, but merging objects apparently overrides values instead of making a nested object, by default. I've tried the following, making use of jQuery.extend(), but it doesn't actually combine like-terms (i.e. parents and grand-parents).
let array = fileContents.split('\n');
let object = array.map(function(string) {
const result = string.split('/').reduceRight((all, item) => ({
[item]: all
}), {});
return result;
});
output = $.extend(true, object);
console.log(JSON.stringify(output));
This turned the array of arrays into nested objects, but didn't merge like-terms...
I could brute-force this, but my real-world problem has over 2000 lines, goes 5 layers deep (/foo/foo/foo/foo value value value), and actually has an array of space-separated values rather than a single value per line. I'm willing to treat the array of values like a string and just pretend its not an array, but it would be really nice to at least get the objects nested properly without writing a hefty/primitive algorithm.
Since this is essentially how subdirectories are organized, it seems like this should be an easy one. Is there a relatively simple solution I'm not seeing?
You could reduce the array of lines, splitting each into its keys and a value, and set the value by walking all the keys.
If no value is supplied (as for the Title line), the keys string itself is used as the value.
const
setValue = (target, keys, value) => {
const last = keys.pop();
keys.reduce((o, k) => o[k] ??= {}, target)[last] = value;
return target;
},
data = 'Title\n/foo/bar/foo/bar 1\n/foo/bar/bar 2\n/bar/foo 3\n/bar/foo/bar 4',
result = data
.split(/\n/)
.reduce((r, line) => {
const [keys, value] = line.split(' ');
return setValue(r, keys.split('/').filter(Boolean), value || keys);
}, {});
console.log(result);
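The dense part is o[k] ??= {} (logical nullish assignment) inside the reduce: if o[k] is null or undefined, assign a new {} to it, then use it. Here's a minimal sketch of just that walk, with made-up keys, showing how the intermediate objects are created on demand:
const target = {};
// walk/create target.foo.bar, then set the final key on it
['foo', 'bar'].reduce((o, k) => o[k] ??= {}, target)['baz'] = 1;
console.log(target); // { foo: { bar: { baz: 1 } } }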
I'm trying to figure out the cleanest way of using the string-similarity library in NodeJS with the 2 arrays used in my project.
The first is an array of objects that look something like this:
{
eventName: "Some event name",
tournamentName: "US Open",
city: "New York"
}
The second array contains objects that looks slightly different, for example:
{
eventName: "Some event name",
temperature: "28",
spectators: "15000"
}
What I'm trying to do is build something that iterates through the first array and finds the closest matching event name in the second array, based of course ONLY on the eventName property using the "string-similarity" NodeJS library.
The below method works really well:
stringSimilarity.findBestMatch(eventName, arrayOfEventNames)
But of course the 2nd parameter requires an array consisting only of event names. I don't have that. I have an array consisting of objects. It's true that one of the properties of these objects is the event name, so what I'm trying to figure out is the best way to pass that in to this function. I built the below function (calling it inside forEach on first array) which basically takes in the name of the event I want to search for and the second array of objects and then creates a new temporary array inside it of ONLY the event names. Then I have the 2 inputs I need to call the stringSimilarity.findBestMatch method.
function findIndexOfMatchingEvent(eventName, arrayToCompareAgainst) {
let onlyEventNames = [];
arrayToCompareAgainst.forEach(e => {
onlyEventNames.push(e.eventName);
});
if (arrayToCompareAgainst.length !== onlyEventNames.length) {
throw new Error("List of events array length doesn't match event names array length!");
}
const bestMatch = stringSimilarity.findBestMatch(eventName, onlyEventNames);
const bestMatchEventName = bestMatch.bestMatch.target;
const bestMatchAccuracyRating = bestMatch.bestMatch.rating;
const index = arrayToCompareAgainst.findIndex(e => {
return e.eventName === bestMatchEventName;
});
if (index === -1) {
throw new Error("Could not find matched event in original event list array");
} else if (bestMatchAccuracyRating >= 0.40) {
return index;
}
}
This works but it feels very wrong to me. I'm creating this new temporary array so many times. If my first array has 200 objects, then for each of those I'm calling my custom function which is then creating this temporary array (onlyEventNames) 200 times as well. And even worse, it's not really connected to the original array in any way, which is why I'm then using .findIndex to go back and find which object inside the array the found event refers to.
Would really appreciate some feedback/advice on this one. Thanks in advance!
In my earlier answer I misunderstood the question.
There's no need to recreate the array of event names for each entry in the other array you want to compare. Create the array of event names once, then reuse that array when looping through the other array's entries. You can create the array of event names the way you did in findIndexOfMatchingEvent, but the more idiomatic way is with map.
Assuming these arrays:
const firstArray = [
{
eventName: "Some event name",
tournamentName: "US Open",
city: "New York"
},
// ...
];
const secondArray = [
{
eventName: "Some event name",
temperature: "28",
spectators: "15000"
},
// ...
];
Then you can do this:
const onlyEventNames = secondArray.map(e => e.eventName);
let bestResult;
let bestRating = 0;
for (const {eventName} of firstArray) {
const result = stringSimilarity.findBestMatch(eventName, onlyEventNames)
if (!bestResult || bestRating < result.bestMatch.rating) {
// Better match
bestResult = secondArray[result.bestMatchIndex];
bestRating = result.bestMatch.rating;
}
}
if (bestRating >= 0.4) {
// Use `bestResult`
}
When done with the loop, bestResult will be the object from the second array that is the best match for the events in the first array, and bestRating will be the rating of that object. (That assumes there are entries in the arrays. If there are no entries in firstArray, bestResult will be undefined and bestRating will be 0; if there aren't any in the second array, I don't know what findBestMatch returns [or if it throws].)
About your specific concerns:
I'm creating this new temporary array so many times.
Yes, that's definitely not ideal (though with 200 elements, it's really not a big problem). That's why in the above I create it only once and reuse it.
...it's not really connected to the original array in any way...
It is: by index. You know for sure that if the match was found at index 2 of onlyEventNames, that match is for index 2 of secondArray. In the code above I grab the entry using the index returned by findBestMatch.
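As a small sketch of that correspondence (reusing the names from the code above and a made-up search string):
// onlyEventNames[i] was produced from secondArray[i], so the index that
// findBestMatch reports maps straight back to the original object:
const onlyEventNames = secondArray.map(e => e.eventName);
const result = stringSimilarity.findBestMatch("Some event name", onlyEventNames);
const matchedEvent = secondArray[result.bestMatchIndex];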