Can arbitrary objects be made reactive in Vuex? - javascript

I am looking for ways to optimize sorting normalized objects by a relationship. I have a Vuex store that contains many normalized objects, like so:
state: {
  worms: {
    3: { id: 3, name: 'Slurms McKenzie', measurements: [1, 6, 9] },
    4: { id: 4, name: 'Memory worm', measurements: [3, 4, 12] },
    6: { id: 6, name: 'Alaskan Bull Worm', measurements: [5, 7, 14] },
    ...
  },
  measurements: {
    1: { id: 1, length: 5.2, timestamp: ... },
    2: { id: 2, length: 3.4, timestamp: ... },
    3: { id: 3, length: 5.4, timestamp: ... },
    ...
  },
};
Say I need to sort my worms by the timestamp at which they reached their greatest length. Being steeped in Vue's reactivity, I would love to be able to define a getter on each worm, like this:
const getters = {
  longestLength: {
    get() {
      return $store.getters
        .measurements(this.measurements)
        .sort(...)[0]
    },
  },
  timestampForLongest: {
    get() { return this.longestLength.timestamp }
  }
};
worm.extend(getters);
I could then easily and quickly sort on timestampForLongest assuming the value is cached.
I have a great entry point to call this extend (or whatever it ends up being called), but I have a few challenges.
The way I handle this now is by calculating a denormalized map and then sorting based on this. The latency is ~700ms on my 8th gen Intel processor in Chrome, which I'd really like to cut down on.
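In case it helps, here is a simplified sketch of what I do today (illustrative only; the real code is more involved):
// Simplified sketch of the current approach: build a denormalized map of
// wormId -> timestamp of its longest measurement, then sort against it.
const timestampForLongest = {}
for (const worm of Object.values(state.worms)) {
  const longest = worm.measurements
    .map(id => state.measurements[id])
    .reduce((a, b) => (a.length >= b.length ? a : b))
  timestampForLongest[worm.id] = longest.timestamp
}
const sorted = Object.values(state.worms)
  .sort((a, b) => timestampForLongest[a.id] - timestampForLongest[b.id])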
I don't know how to invoke Vue's reactivity system manually. I believe that I need to define getters that call something like measurement.__ob__.dep.depend() but I haven't wrapped my head around it.
The API to achieve this may be private and subject to change. Is Vue just too slow to handle 800+ rows?
I don't know how to keep the Vuex store ($store) in scope for the getters. I could probably use arrow functions, so I'm not as worried about this.
Can I calculate and cache values on demand in plain javascript objects using Vue?
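For reference, this is roughly what I have in mind with Vue.observable (available since Vue 2.6), although as far as I can tell it only gives me reactivity, not cached computed-style getters:
// Vue.observable makes a plain object reactive, but it does not add cached,
// computed-style getters on its own.
const measurements = Vue.observable({
  1: { id: 1, length: 5.2, timestamp: 1000 },
  2: { id: 2, length: 3.4, timestamp: 2000 }
})

// This recomputes on every call; there is no caching:
const longestFor = ids => ids
  .map(id => measurements[id])
  .reduce((a, b) => (a.length >= b.length ? a : b))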

Hopefully this is somewhere close to what you had in mind.
If I've understood correctly, your intent was to create 'computed properties' (or getters of some kind) called longestLength and timestampForLongest on each worm. These would derive their values based on the measurements in the state.
I've attempted to do this by making each worm a Vue instance. Obviously a Vue instance provides a lot of other functionality, such as rendering, that isn't needed in this case. In Vue 2 there isn't any way to single out just the bits you need; rumour has it Vue 3 may be more modular in this regard. The only bits we need are observable data (which could be implemented using Vue.observable) and computed properties (which are only available via a Vue instance). For what it's worth, this is how Vuex works behind the scenes, creating a separate Vue instance and plugging into its data, computed, etc.
While the code below looks long, much of it is concerned with generating suitable test data. I initially generate data with measurements nested inside the worms and then pull it out to the format you've specified inside my mutation. Each instance inside worms is converted to a Vue instance before it is added to the state.
I've added // This bit is important comments to particularly important sections to make it easier to pick them out of the noise.
// This bit is important
const Worm = Vue.extend({
computed: {
longestLength () {
let longest = null
for (const id of this.measurements) {
const measurement = store.state.measurements[id]
if (!longest || measurement.length > longest.length) {
longest = measurement
}
}
return longest
},
timestampForLongest () {
return this.longestLength.timestamp
}
}
})
const state = {
worms: {},
measurements: {}
};
const mutations = {
populate (state, worms) {
const wormState = {}
const measurementsState = {}
let measurementId = 0
for (const worm of worms) {
const measurementIds = []
for (const measurement of worm.measurements) {
measurementId++
measurementIds.push(measurementId)
measurementsState[measurementId] = {id: measurementId, ...measurement}
}
// This bit is important
wormState[worm.id] = new Worm({
data: {...worm, measurements: measurementIds}
})
}
state.worms = wormState
state.measurements = measurementsState
}
};
const getters = {
// This bit is important
sortedWorms (state) {
return Object.values(state.worms).sort((wormA, wormB) => wormA.timestampForLongest - wormB.timestampForLongest)
}
};
const actions = {
populateWorms ({commit}) {
const worms = []
for (let wIndex = 0; wIndex < 800; ++wIndex) {
const measurements = []
for (let mIndex = 0; mIndex < 3; ++mIndex) {
measurements.push({
length: Math.round(Math.random() * 100) / 10,
timestamp: Math.round(Math.random() * 1e6)
})
}
worms.push({
measurements,
name: 'Worm ' + wIndex,
id: wIndex
})
}
commit('populate', worms)
}
}
const store = new Vuex.Store({
state,
mutations,
getters,
actions
})
new Vue({
el: '#app',
store,
computed: {
sortedWorms () {
return this.$store.getters.sortedWorms
}
},
methods: {
go () {
this.$store.dispatch('populateWorms')
}
}
})
<script src="https://unpkg.com/vue#2.6.10/dist/vue.js"></script>
<script src="https://unpkg.com/vuex#3.1.1/dist/vuex.js"></script>
<div id="app">
<button @click="go">Go</button>
<div v-for="worm in sortedWorms">
{{ worm.name }} - {{ worm.longestLength }}
</div>
</div>
Whether this is actually a good way to implement it all, given your underlying requirement of optimal sorting, I'm not so sure. However, it seemed as close as I could get to your intent of implementing computed properties on each worm.

I would suggest a totally different approach:
1. Avoid sorting by any means
2. Instead, keep properties such as max_length and max_time on each worm object and update them whenever a new measurement is posted (or recorded) for that worm
This way, you can avoid sorting each time.
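A minimal sketch of that idea (the recordMeasurement mutation and its payload shape are made up here, not taken from the question):
// Sketch only: keep max_length / max_time up to date as measurements arrive,
// so the per-worm maximum never has to be recomputed by sorting measurements.
const mutations = {
  recordMeasurement(state, { wormId, measurement }) {
    Vue.set(state.measurements, measurement.id, measurement)
    const worm = state.worms[wormId]
    worm.measurements.push(measurement.id)
    if (worm.max_length === undefined || measurement.length > worm.max_length) {
      Vue.set(worm, 'max_length', measurement.length)
      Vue.set(worm, 'max_time', measurement.timestamp)
    }
  }
}
// When you do need an ordered list, it reduces to comparing stored fields:
// Object.values(state.worms).sort((a, b) => a.max_time - b.max_time)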

The code you provided has syntax errors, so they had to be corrected first:
const states = {
worms: {
3: {
id: 3,
name: 'Slurms McKenzie',
measurements: [1, 6, 9]
},
4: {
id: 4,
name: 'Memory worm',
measurements: [3, 4, 12]
},
6: {
id: 6,
name: 'Alaskan Bull Worm',
measurements: [5, 7, 14]
}
},
measurements: {
1: {
id: 1,
length: 5.2,
timestamp: 'ts1'
},
2: {
id: 2,
length: 3.4,
timestamp: 'ts2'
},
3: {
id: 3,
length: 5.4,
timestamp: 'ts3'
},
}
}
const store = new Vuex.Store({
state: states,
getters: {
getWorms: state => {
return state.worms
},
getLongestLengthByMeasurementId: state => ids => {
const mapped = ids.map(id => {
const measurement = state.measurements[id]
if (measurement) {
return {
length: measurement.length || 0,
timestamp: measurement.timestamp || 0
}
} else {
return {
length: 0,
timestamp: 0
}
}
})
return mapped.find(item => item.length === Math.max.apply(null, mapped.map(item => item.length))).timestamp
}
},
mutations: {
// setting timestamp in store.state.worms[wormId]
setLongestLength(state, wormId) {
if (state.worms[wormId] && typeof state.worms[wormId].timestamp !== 'undefined') {
// update the timestamp
} else {
// get and set the timestamp
const ts = store.getters.getLongestLengthByMeasurementId(state.worms[wormId].measurements)
Vue.set(state.worms[wormId], 'timestamp', ts)
}
},
},
actions: {
// set timestamp worm by worm
setLongestLength({
commit
}, wormId) {
Object.keys(store.getters.getWorms).forEach(key =>
commit('setLongestLength', parseInt(key, 10))
)
}
}
})
const app = new Vue({
store,
el: '#app',
mounted() {
store.dispatch('setLongestLength')
console.log('worms', store.state.worms)
}
})
<script src="https://unpkg.com/vue"></script>
<script src="https://unpkg.com/vuex"></script>
<script src="https://unpkg.com/axios/dist/axios.min.js"></script>
<div id="app">
<div v-for="worm in $store.state.worms">Timestamp by worm (ID {{worm.id}}): {{worm.timestamp}}</div>
</div>
You only need the get: form in a computed property if you also define a set:.
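For illustration (not taken from the question; WormRow is a made-up component and renameWorm a made-up mutation), the two computed-property forms look like this:
const WormRow = {
  props: ['id', 'measurements'],
  computed: {
    // read-only computed: a plain function is all that's needed
    longestLength() {
      return this.measurements
        .map(id => this.$store.state.measurements[id])
        .sort((a, b) => b.length - a.length)[0]
    },
    // get/set form: only needed when you also assign to the property
    wormName: {
      get() { return this.$store.state.worms[this.id].name },
      set(name) { this.$store.commit('renameWorm', { id: this.id, name }) } // hypothetical mutation
    }
  },
  template: '<div>{{ wormName }}: {{ longestLength.length }}</div>'
}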

Usually when I make a Vue application with a lot of data such as yours, I'll do something like this:
const vm = new Vue({
data() {
return {
worms: [
{id: 1,name: "Slurms McKenzie",measurements: [1, 6, 9]},
{id: 2,name: "Memory worm",measurements: [3, 4, 12]},
{id: 3,name: "Alaskan Bull Worm",measurements: [5, 7, 14]}
],
measurements: [
{id: 1,length: 5.2,timestamp: 123},
{id: 2,length: 3.4,timestamp: 456},
{id: 3,length: 5.4,timestamp: 789}
]
};
},
computed: {
sortedByLength() {
return [...this.measurements]
.sort((a, b) => a.length - b.length)
.map(measurement => measurement.id)
.map(id => this.worms.find(worm => worm.id === id));
},
timestampForLongest() {
return this.sortedByLength[0].timestamp;
}
}
});
Vue will recompute computed properties when their dependencies change, and serve the cached value otherwise. All you need to do is convert this into state/getters for Vuex and the principles are the same.
Storing them as arrays is much easier to handle than as objects. If you must use objects, you could probably use the lodash library to help you sort without it being obnoxious.
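For example, something along these lines with lodash (a sketch; it assumes the normalized worms/measurements shape from the question and that lodash is available as _):
// Sort the keyed worms object by the timestamp of each worm's longest measurement.
const sortedWorms = _.orderBy(
  _.values(state.worms),
  [worm => _.maxBy(
    worm.measurements.map(id => state.measurements[id]),
    'length'
  ).timestamp],
  ['asc']
)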

This might help. I don't know the whole structure of your project, but I tried to recreate it here, and this is one approach.
You've defined a state that contains worms and measurements collections. Each worm has a list of measurement IDs, which I assume refer to entries in the measurements collection.
The state should be defined inside your Vuex store. Your store will have four main elements: state, getters, actions, and mutations.
The state, in essence, can be viewed as the single source of truth for the entire application. But how can our components and routes access the data stored in the state? The getters return data from the store back to our components; in this case we want the sortedByTSDec and sortedByTSAsc getters.
Now that you have figured out how to get data out of the state, let's see how to set data into it. You might think you can define setters, right? Well, no, Vuex's "setters" are named slightly differently: you define a mutation to set data into your state.
Finally, actions are similar to mutations, but instead of mutating the state directly they commit a mutation. Confused? Just think of actions as asynchronous functions, while mutations are synchronous.
In this example I don't know where the worms data is generated; it could come from another server, a database, and so on. So the generateData action will request and wait for the data, and when it is ready it will commit the populate mutation to populate the state.
So what about the Worm class?
Here is where the magic happens. The Vue.extend() method creates a subclass of the base Vue constructor. Why? Because this subclass accepts a data option, which we set in the populate mutation with the data of a generated worm. In other words, state.worms contains a list of Worm instances.
We also declare computed properties to calculate longestLength and timestampForLongest from each instance's data.
Now, if you want to sort the worms list by the timestamp of each worm's longest length, we first calculate the longest length and then use the .sort() method. By default this method sorts values as strings, so we need to provide a compare function to define the sort order. The compare function should return a negative, zero, or positive value depending on its arguments. In this case we used b.timestampForLongest - a.timestampForLongest for descending order, but you can use a.timestampForLongest - b.timestampForLongest for ascending order.
Here is a basic snippet:
const randomDate = function (start, end) {
return new Date(start.getTime() + Math.random() * (end.getTime() - start.getTime())).getTime()/1000;
};
const Worm = Vue.extend({
computed: {
longestLength() {
let longest;
for (const id of this.measurements) {
const measurement = store.state.measurements[id];
if (!longest || measurement.length > longest.length) {
longest = measurement;
}
}
return longest;
},
timestampForLongest() {
return this.longestLength.timestamp
},
},
});
const store = new Vuex.Store({
state: {
worms: {},
measurements: {},
},
actions: {
generateData({commit}) {
const worms = [];
for (let w = 0; w < 800; ++w) {
const measurements = []
for (let m = 0; m < 3; ++m) {
measurements.push({
length: Math.round(Math.random() * 100) / 10,
timestamp: randomDate(new Date(2018, 1, 1), new Date()),
});
}
worms.push({
id: w,
name: 'Worm Name ' + w,
measurements,
});
}
commit('populate', worms)
}
},
mutations: {
populate(state, worms) {
const wormList = {};
const measurementList = {};
let measurementId = 0;
for (let worm of worms) {
const measurementIds = [];
for (let measurement of worm.measurements) {
measurementId++
measurementIds.push(measurementId)
measurementList[measurementId] = {
id: measurementId,
...measurement,
}
}
wormList[worm.id] = new Worm({
data: {
...worm,
measurements: measurementIds,
}
});
}
state.worms = wormList;
state.measurements = measurementList;
}
},
getters: {
sortedByTSDec(state) {
return Object.values(state.worms).sort((a, b) => b.timestampForLongest - a.timestampForLongest);
},
sortedByTSAsc(state) {
return Object.values(state.worms).sort((a, b) => a.timestampForLongest - b.timestampForLongest);
},
},
});
const app = new Vue({
el: '#app',
store,
computed: {
sortedState() {
return this.$store.getters.sortedByTSDec;
}
},
methods: {
calculate() {
this.$store.dispatch('generateData');
},
timestamp2Date(ts) {
let newDate = new Date();
newDate.setTime(ts * 1000);
return newDate.toUTCString();
}
},
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.6.10/vue.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/vuex/3.1.1/vuex.min.js"></script>
<div id="app">
<button v-on:click="calculate">Get the Longest</button>
<div v-for="worm in sortedState">
{{ worm.name }} has its longest length of {{ worm.longestLength.length }}cm at {{ timestamp2Date(worm.longestLength.timestamp) }}
</div>
</div>

Related

The fastest way to find an element in an array with a given property name and replace it

I have a performance issue with NgRx. I have an array with thousands of objects that looks like this (I can't change that structure even though I don't like it):
state.alarms structure:
[
{ global: {...} },
{ 282: {...} },
{ 290: {...} },
{ 401: {...} }
etc...
]
In addNewAlarm(state, alarm), the alarm object is, for example:
{ 282: {...} }
As you can see, each entry looks something like { someNumber: nestedObjectForThatNumber }.
I'm listening for changes, and when one appears I have to replace the array entry whose key is the given number.
In the example above, I get { 282: {x: 1, y: 2, z: 3} }, so I have to replace the array item at index 1.
In my reducer I've created something like this but it doesn't work as I expected:
export function addNewAlarm(state: State, alarm: AlarmsObject): State | undefined {
const alarms: AlarmsObject[] = [...state.alarms];
if (state) {
const existingRecord = state.alarms.find(alarm1 => alarm1.hasOwnProperty(Object.keys(alarm)[0]));
if (existingRecord) {
const index = state.alarms.indexOf(existingRecord);
alarms[index] = alarm;
}
}
return { ...state, alarms };
}
Maybe someone can give me a hint how to do it in a right way?
You can use findIndex (it returns -1 when nothing matches), but why not key the state by number and use an object instead?
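For completeness, here is roughly what the findIndex route could look like (a sketch; addNewAlarmWithFindIndex is a made-up name and types are omitted for brevity):
// Find the entry whose single key matches the incoming alarm's key,
// then replace it immutably; append if nothing matches.
function addNewAlarmWithFindIndex(state, alarm) {
  const key = Object.keys(alarm)[0]
  const index = state.alarms.findIndex(entry => key in entry)
  if (index === -1) {
    return { ...state, alarms: [...state.alarms, alarm] }
  }
  const alarms = [...state.alarms]
  alarms[index] = alarm
  return { ...state, alarms }
}
And here is the object approach: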
stateObj: any = {};
this.state.forEach((x) => {
this.stateObj = { ...this.stateObj, ...x };
});
So you only need use
//Note you needn't return anything
addNewAlarm(stateObj: any, alarm: AlarmsObject){
const key=Object.keys(alarm)[0]
this.stateObj[key] = alarm[key]
}
A full StackBlitz example

Javascript memoize find array

I'm trying to improve my knowledge of memoization in JavaScript, and I have created a memoization function (I think...).
I've got an array of changes (a change log) made to items. Each item in the array contains a reference id (employeeId) identifying who made the edit. It looks something like this:
const changeLog = [
{
id: 1,
employeeId: 1,
field: 'someField',
oldValue: '0',
newValue: '100',
},
{
id: 2,
employeeId: 2,
field: 'anotherField',
oldValue: '20',
newValue: '100',
},
...
]
I've also got an array containing each employee, looking something like this:
const employees = [
{
name: 'Joel Abero',
id: 1
},
{
name: 'John Doe',
id: 2
},
{
name: 'Dear John',
id: 3
}
]
To find the employee who made the change, I map over each item in the changeLog and find the entry in the employees array whose id equals employeeId.
Both of these arrays contain 500+ items; I've just pasted fragments.
Below I've pasted my memoize helper.
1) How can I perform a test to see which of these two run the fastest?
2) Is this a proper way to use memoization?
3) Is there a rule of thumb when to use memoization? Or should I use it as often as I can?
const employees = [
{
name: 'Joel Abero',
id: 1
},
{
name: 'John Doe',
id: 2
},
{
name: 'Dear John',
id: 3
}
]
const changeLog = [
{
id: 1,
employeeId: 1,
field: 'someField',
oldValue: '0',
newValue: '100',
},
{
id: 2,
employeeId: 2,
field: 'anotherField',
oldValue: '0',
newValue: '100',
},
{
id: 3,
employeeId: 3,
field: 'someField',
oldValue: '0',
newValue: '100',
},
{
id: 4,
employeeId: 3,
field: 'someField',
oldValue: '0',
newValue: '100',
},
{
id: 5,
employeeId: 3,
field: 'someField',
oldValue: '0',
newValue: '100',
}
]
function findEditedByEmployee (employeeId) {
return employees.find(({ id }) => id === employeeId)
}
function editedByWithMemoize () {
let employeesSavedInMemory = {}
return function(employeeId) {
if(employeeId in employeesSavedInMemory) {
console.log("from memory")
return employeesSavedInMemory[employeeId]
}
console.log("not from memory")
const findEditedBy = findEditedByEmployee(employeeId)
employeesSavedInMemory[findEditedBy.id] = {name: findEditedBy.name }
return findEditedBy
}
}
const memoizedEmployee = editedByWithMemoize();
// with memoization
const changeLogWithEmployeesMemoized = changeLog.map( log => {
const employeeName = memoizedEmployee(log.employeeId);
return {
...log,
employeeName: employeeName.name
}
})
// without memoization
const changeLogWithEmployees = changeLog.map( log => {
const editedBy = findEditedByEmployee(log.employeeId);
return {
...log,
employeeName: editedBy.name
}
})
console.log('memoized', changeLogWithEmployeesMemoized)
console.log('not memoized', changeLogWithEmployees)
I'll try to answer each of your questions:
1) How can I perform a test to see which of these two run the fastest?
The best way is just a simple for loop. Take for example a fake API request:
const fakeAPIRequest = id => new Promise(r => setTimeout(r, 100, {id}))
This will take 100ms to complete per request. Using memoization, we can avoid making this 100ms request by checking if we've made this request before:
const cache = {}
const memoizedRequest = async (id) => {
if (id in cache) return Promise.resolve(cache[id])
return cache[id] = await fakeAPIRequest(id)
}
Here's a working example:
const fakeAPIRequest = id => new Promise(r => setTimeout(r, 100, {id}))
const cache = {}
const memoizedRequest = async (id) => {
if (id in cache) return Promise.resolve(cache[id])
return cache[id] = await fakeAPIRequest(id)
}
const request = async (id) => await fakeAPIRequest(id)
const test = async (name, cb) => {
console.time(name)
for (let i = 50; i--;) {
await cb()
}
console.timeEnd(name)
}
test('memoized', async () => await memoizedRequest('test'))
test('normal', async () => await request('test'))
2) Is this a proper way to use memoization?
I'm not entirely sure what you mean by this, but think of it as short-term caching.
If your memoized call includes an API request, memoization can be great for data that doesn't change, saving plenty of time; on the other hand, if the data is subject to change within a short period, memoization can be a bad idea, because the cached result will quickly become outdated.
If you are making many many calls to this function, it could eat up memory depending on how big the return data is, but this factor is down to implementation, not "a proper way".
3) Is there a rule of thumb when to use memoization? Or should I use it as often as I can?
More often than not, memoization is overkill. Computers are extremely fast, so it can often boil down to just saving milliseconds; if you only call the function a few times, memoization provides little to no benefit. But I do keep emphasising API requests, which can take long periods of time. If you start using a memoized function, you should strive to use it wherever possible. As mentioned before, though, it can eat up memory quickly depending on the size of the return data.
One last point about memoization is that if the data is already client side, using a map like Nina suggested is definitely a much better and more efficient approach. Instead of looping each time to find the object, it loops once to transform the array into an object (or Map), allowing you to access the data in O(1) time. Take this example, using find this time instead of the fakeAPI function I made earlier:
const data = [{"id":0},{"id":1},{"id":2},{"id":3},{"id":4},{"id":5},{"id":6},{"id":7},{"id":8},{"id":9},{"id":10},{"id":11},{"id":12},{"id":13},{"id":14},{"id":15},{"id":16},{"id":17},{"id":18},{"id":19},{"id":20},{"id":21},{"id":22},{"id":23},{"id":24},{"id":25},{"id":26},{"id":27},{"id":28},{"id":29},{"id":30},{"id":31},{"id":32},{"id":33},{"id":34},{"id":35},{"id":36},{"id":37},{"id":38},{"id":39},{"id":40},{"id":41},{"id":42},{"id":43},{"id":44},{"id":45},{"id":46},{"id":47},{"id":48},{"id":49},{"id":50},{"id":51},{"id":52},{"id":53},{"id":54},{"id":55},{"id":56},{"id":57},{"id":58},{"id":59},{"id":60},{"id":61},{"id":62},{"id":63},{"id":64},{"id":65},{"id":66},{"id":67},{"id":68},{"id":69},{"id":70},{"id":71},{"id":72},{"id":73},{"id":74},{"id":75},{"id":76},{"id":77},{"id":78},{"id":79},{"id":80},{"id":81},{"id":82},{"id":83},{"id":84},{"id":85},{"id":86},{"id":87},{"id":88},{"id":89},{"id":90},{"id":91},{"id":92},{"id":93},{"id":94},{"id":95},{"id":96},{"id":97},{"id":98},{"id":99}]
const cache = {}
const findObject = id => data.find(o => o.id === id)
const memoizedFindObject = id => id in cache ? cache[id] : cache[id] = findObject(id)
const map = new Map(data.map(o => [o.id, o]))
const findObjectByMap = id => map.get(id)
const list = Array(50000).fill(0).map(() => Math.floor(Math.random() * 100))
const test = (name, cb) => {
console.time(name)
for (let i = 50000; i--;) {
cb(list[i])
}
console.timeEnd(name)
}
test('memoized', memoizedFindObject)
test('normal', findObject)
test('map', findObjectByMap)
All in all, memoization is a great tool, very similar to caching. It provides a big speed up on heavy calculations or long network requests, but can prove ineffective if used infrequently.
I would create a Map in advance and get the object from the map for an update.
If the map does not contain a wanted id, create a new object and add it to employees and to the map.
const
employees = [{ name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 }],
changeLog = [{ id: 1, employeeId: 1, field: 'someField', oldValue: '0', newValue: '100' }, { id: 2, employeeId: 2, field: 'anotherField', oldValue: '20', newValue: '100' }],
map = employees.reduce((map, o) => map.set(o.id, o), new Map);
for (const { id, field, newValue } of changeLog) {
let object = map.get(id);
if (object) {
object[field] = newValue;
} else {
let temp = { id, [field]: newValue };
employees.push(temp)
map.set(id, temp);
}
}
console.log(employees);
.as-console-wrapper { max-height: 100% !important; top: 0; }
Your memoization process is faulty!
You don't return objects with the same shape
When you don't find an employee in the cache, you look it up and return the entire object; however, you only memoize part of the object:
employeesSavedInMemory[findEditedBy.id] = {name: findEditedBy.name }
So, when you find the employee in cache, you return a cut-down version of the data:
const employees = [ { name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 } ]
function findEditedByEmployee (employeeId) {
return employees.find(({ id }) => id === employeeId)
}
function editedByWithMemoize () {
let employeesSavedInMemory = {}
return function(employeeId) {
if(employeeId in employeesSavedInMemory) {
console.log("from memory")
return employeesSavedInMemory[employeeId]
}
console.log("not from memory")
const findEditedBy = findEditedByEmployee(employeeId)
employeesSavedInMemory[findEditedBy.id] = {name: findEditedBy.name }
return findEditedBy
}
}
const memoizedEmployee = editedByWithMemoize();
const found = memoizedEmployee(1);
const fromCache = memoizedEmployee(1);
console.log("found:", found); //id + name
console.log("fromCache:", fromCache);//name
You get different data back when calling the same function with the same parameters.
You don't return the same objects
Another big problem is that you create a new object. Even if you change the code to store a complete copy, the memoization is not transparent:
const employees = [ { name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 } ]
function findEditedByEmployee (employeeId) {
return employees.find(({ id }) => id === employeeId)
}
function editedByWithMemoize () {
let employeesSavedInMemory = {}
return function(employeeId) {
if(employeeId in employeesSavedInMemory) {
console.log("from memory")
return employeesSavedInMemory[employeeId]
}
console.log("not from memory")
const findEditedBy = findEditedByEmployee(employeeId)
employeesSavedInMemory[findEditedBy.id] = { ...findEditedBy } //make a copy of all object properties
return findEditedBy
}
}
const memoizedEmployee = editedByWithMemoize();
memoizedEmployee(1)
const found = memoizedEmployee(1);
const fromCache = memoizedEmployee(1);
console.log("found:", found); //id + name
console.log("fromCache:", fromCache); //id + name
console.log("found === fromCache :", found === fromCache); // false
The result is basically the same: you get "different" data, in that the objects are not the same one. So, for example, if you do:
const employees = [ { name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 } ]
function findEditedByEmployee (employeeId) {
return employees.find(({ id }) => id === employeeId)
}
function editedByWithMemoize () {
let employeesSavedInMemory = {}
return function(employeeId) {
if(employeeId in employeesSavedInMemory) {
console.log("from memory")
return employeesSavedInMemory[employeeId]
}
console.log("not from memory")
const findEditedBy = findEditedByEmployee(employeeId)
employeesSavedInMemory[findEditedBy.id] = { ...findEditedBy } //make a copy of all object properties
return findEditedBy
}
}
const memoizedEmployee = editedByWithMemoize();
const original = employees[0];
const found = memoizedEmployee(1);
found.foo = "hello";
console.log("found:", found); //id + name + foo
const fromCache = memoizedEmployee(1);
console.log("fromCache 1:", fromCache); //id + name
fromCache.bar = "world";
console.log("fromCache 2:", fromCache); //id + name + bar
console.log("original:", original); //id + name + foo
Compare with a normal implementation
I'll use memoize from Lodash but there are many other generic implementations and they still work the same way, so this is only for reference:
const { memoize } = _;
const employees = [ { name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 } ]
function findEditedByEmployee (employeeId) {
return employees.find(({ id }) => id === employeeId)
}
const memoizedEmployee = memoize(findEditedByEmployee);
const original = employees[0];
const found = memoizedEmployee(1);
console.log("found 1:", found); //id + name
found.foo = "hello";
console.log("found 2:", found); //id + name + foo
const fromCache = memoizedEmployee(1);
console.log("fromCache 1:", fromCache); //id + name + foo
fromCache.bar = "world";
console.log("fromCache 2:", fromCache); //id + name + foo + bar
console.log("original:", original); //id + name + foo + bar
console.log("found === fromCache :", found === fromCache); //true
<script src="https://cdn.jsdelivr.net/npm/lodash#4.17.15/lodash.min.js"></script>
This is just a demonstration that the memoization is completely transparent and does not produce any odd or unusual behaviour. Using the memoized function is exactly the same as using the normal function in terms of effects. The only difference is the caching, but there is no impact on how the function behaves.
Onto the actual questions:
How can I perform a test to see which of these two run the fastest?
Honestly, and personally - you shouldn't. A correct implementation of memoization has known properties. A linear search also has known properties. So, testing for speed is testing two known properties of both algorithms.
Let's dip into pure logic here - we have two things to consider:
the implementation is correct (let's call this P)
properties of implementation are correct (let's call this Q)
We can definitely say that "If the implementation is correct, then properties of implementation are correct", transformable to "if P, then Q" or formally P -> Q. Were we to go in the opposite direction Q -> P and try to test the known properties to confirm the implementation is correct, then we are committing the fallacy of affirming the consequent.
We can indeed observe that testing the speed is not even testing the solution for correctness. You could have an incorrect implementation of memoization that still exhibits the same speed properties as a correct one: an O(n) lookup once and O(1) on repeat reads. So the test Q -> P will fail.
Instead, you should test the implementation for correctness; if you can prove that, then you can deduce that you'd have constant speed on repeat reads. And O(1) access is going to be (in most cases, especially this one) faster than an O(n) lookup.
Consequently, if you don't need to prove correctness, then you can take the rest for granted. And if you use a known implementation of memoization, then you don't need to test your library.
With all that said, there is something you might still need to be aware of. The caching during memoization relies on creating a correct key for the cached item, and this can carry a large, even if constant, overhead depending on how the key is derived. For example, a lookup for something near the start of the array might take 10ms, yet creating the key for the cache might take 15ms, which means the O(1) path would be slower in some cases.
The correct test to verify speed would normally be to check how much time it takes (on average) to look up the first item in the array, the last item in the array, and something from the middle of the array, then check how much time it takes to fetch something from the cache. Each of these has to be run several times to ensure you don't get a random spike of speed either up or down.
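A rough sketch of such a timing harness (it assumes the findEditedByEmployee and memoizedEmployee functions from the question are in scope; the iteration count is arbitrary):
// Time repeated lookups for first/middle/last positions and a cached read.
const time = (label, fn, iterations = 10000) => {
  console.time(label)
  for (let i = 0; i < iterations; i++) fn()
  console.timeEnd(label) // total time for all iterations
}
time('first item', () => findEditedByEmployee(employees[0].id))
time('middle item', () => findEditedByEmployee(employees[Math.floor(employees.length / 2)].id))
time('last item', () => findEditedByEmployee(employees[employees.length - 1].id))
time('cached read', () => memoizedEmployee(employees[employees.length - 1].id))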
But I'd have more to say later*
2) Is this a proper way to use memoization?
Yes. Again, assuming proper implementation, this is how you'd do it - memoize a function and then you get a lot of benefits for caching.
With that said, you can see from the Lodash implementation that you can generalise the memoization and apply it to any function, instead of writing a memoized version of each. This is quite a big benefit, since you only need to test one memoization function. Otherwise, if you have something like findEmployee(), findDepartment(), and findAddress() functions whose results you want to cache, you need to test each memoization implementation separately.
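A minimal generic version might look like this (a sketch that only handles single-argument functions):
// Generic single-argument memoizer: wrap any lookup function,
// so only this one helper needs testing.
const memoize = fn => {
  const cache = new Map()
  return arg => {
    if (!cache.has(arg)) cache.set(arg, fn(arg))
    return cache.get(arg)
  }
}
const findEmployeeMemoized = memoize(findEditedByEmployee)
// findDepartment / findAddress (if they existed) could be wrapped the same way.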
3) Is there a rule of thumb when to use memoization? Or should I use it as often as I can?
Yes, you should use it as often as you can* (with a huge asterisk)
* (huge asterisk)
This is the biggest asterisk I know how to make using markdown (outside just embedding images). I could go for a slightly bigger one but alas.
You have to determine when you can use it, in order to use it when you can. I'm not just saying this to be confusing - you shouldn't just be slapping memoized functions everywhere. There are situations when you cannot use them. And that's what I alluded to at the end of answering the first question - I wanted to talk about this in a single place:
You have to tread very carefully to verify what your actual usage is. If you have a million items in an array and only the first 10 are looked up faster than being fetched from cache, then there is 0.001% of items that would have no benefit from caching. In that case - you get a benefit from caching...or do you? If you only do a couple of reads per item, and you're only looking up less than a quarter of the items, then perhaps caching doesn't give you a good long term benefit. And if you look up each item exactly two times, then you're doubling your memory consumption for honestly quite trivial improvement of speed overall. Yet, what if you're not doing in-memory lookup from an array but something like a network request (e.g., database read)? In that case caching even for a single use could be very valuable.
You can see how a single detail can swing wildly whether you should use memoization or not. And often times it's not even that clear when you're initially writing the application, since you don't even know how often you might end up calling a function, what value you'd feed it, nor how often you'd call it with the same values over and over again. Even if you have an idea of what the typical usage might be, you still will need a real environment to test with, instead of just calling a non-memoized and a memoized version of a function in isolation.
Eric Lippert has an amazing piece on performance testing that mostly boils down to: when performance matters, try to test the real application with real data and real usage. Otherwise your benchmark might be off for all sorts of reasons.
Even if memoization is clearly "faster" you have to consider memory usage. Here is a slightly silly example to illustrate memoization eating up more memory than necessary:
const { memoize } = _;
//stupid recursive function that removes 1 from `b` and
//adds 1 to `a` until it finds the total sum of the two
function sum (a, b) {
if(b)
return sum(a + 1, b - 1)
//only log once to avoid spamming the logs but show if it's called or not
console.log("sum() finished");
return a;
}
//memoize the function
sum = memoize(sum);
const result = sum(1, 999);
console.log("result:", result);
const resultFromCache1 = sum(1, 999); //no logs as it's cached
console.log("resultFromCache1:", resultFromCache1);
const resultFromCache2 = sum(999, 1); //no logs as it's cached
console.log("resultFromCache2:", resultFromCache2);
const resultFromCache3 = sum(450, 550); //no logs as it's cached
console.log("resultFromCache3:", resultFromCache3);
const resultFromCache4 = sum(42, 958); //no logs as it's cached
console.log("resultFromCache4:", resultFromCache4);
<script src="https://cdn.jsdelivr.net/npm/lodash#4.17.15/lodash.min.js"></script>
This will put one thousand cached results in memory. Yes, the memoized function is silly and makes a lot of unnecessary calls, which means there is a lot of memory overhead. Yet at the same time, if we re-call it with any arguments that sum to 1000, we immediately get the result without doing any recursion.
You can easily have similar real code, even if there is no recursion involved - you might end up calling some function a lot of times with a lot of different inputs. This will populate the cache with all results and yet whether that is useful or not is still up in the air.
So, if you can you should be using memoization. The biggest problem is finding out if you can.

Creating a JavaScript function that filters out duplicate in-memory objects?

Okay, so I am trying to create a function that allows you to input an array of Objects and it will return an array that removed any duplicate objects that reference the same object in memory. There can be objects with the same properties, but they must be different in-memory objects. I know that objects are stored by reference in JS and this is what I have so far:
const unique = array => {
let set = new Set();
return array.map((v, index) => {
if(set.has(v.id)) {
return false
} else {
set.add(v.id);
return index;
}
}).filter(e=>e).map(e=>array[e]);
}
Any advice is appreciated, I am trying to make this with a very efficient Big-O. Cheers!
EDIT: So many awesome responses. Right now, when I run the script with arbitrary object properties (similar to the answers), I get an empty array. I am still trying to wrap my head around filtering everything out, but only for objects that share the same reference in memory. I am not positive how JS handles objects with the exact same keys/values. Thanks again!
A simple Set will do the trick:
let a = {'a':1}
let b = {'a': 1,'b': 2, }
let c = {'a':1}
let arr = [a,b,c,a,a,b,b,c];
function filterSameMemoryObject(input){
return new Set([...input])
}
console.log(...filterSameMemoryObject(arr))
I don't think you need this much code. Since you're just comparing memory references, you can use === (see equality and sameness).
let a = {'a':1}
console.log(a === a ) // return true for same reference
console.log( {} === {}) // return false for not same reference
I don't see a good reason to do this map-filter-map combination. You can use only filter right away:
const unique = array => {
const set = new Set();
return array.filter(v => {
if (set.has(v.id)) {
return false
} else {
set.add(v.id);
return true;
}
});
};
Also, if your array contains objects that you want to compare by reference rather than by their .id, you don't even need to do the filtering yourself. You could just write:
const unique = array => Array.from(new Set(array));
The idea of using a Set is nice, but a Map will work even better as then you can do it all in the constructor callback:
const unique = array => [...new Map(array.map(v => [v.id, v])).values()]
// Demo:
var data = [
{ id: 1, name: "obj1" },
{ id: 3, name: "obj3" },
{ id: 1, name: "obj1" }, // dupe
{ id: 2, name: "obj2" },
{ id: 3, name: "obj3" }, // another dupe
];
console.log(unique(data));
Addendum
You speak of items that reference the same object in memory. Such a thing does not happen when your array is initialised as a plain literal, but if you assign the same object to several array entries, then you get duplicate references, like so:
const obj = { id: 1, name: "" };
const data = [obj, obj];
This is not the same thing as:
const data = [{ id: 1, name: "" }, { id: 1, name: "" }];
In the second version you have two different references in your array.
I have assumed that you want to "catch" such duplicates as well. If you only consider duplicates to be what is shown in the first version (shared references), then this has been asked before.

How to set value of an immutable state in Javascript?

Given an immutable state like this:
alerts: {
5a8c76171bbb57b2950000c4: [
{
_id:5af7c8652552070000000064
device_id:5a8c76171bbb57b2950000c4
count: 1
},
{
_id:5af7c8722552070000000068
device_id:5a8c76171bbb57b2950000c4
count: 2
}
]
}
and an object like this:
{
_id:5af7c8652552070000000064
device_id:5a8c76171bbb57b2950000c4
count: 2
}
I want to replace the object with the same id in the alerts state (immutable), such that end result looks like this:
alerts: {
5a12356ws13tch: [
{
_id:5af7c8652552070000000064
device_id:5a8c76171bbb57b2950000c4
count: 2
},
{
_id:5af7c8722552070000000068
device_id:5a8c76171bbb57b2950000c4
count: 2
}
]
}
How can I do that? With mergeDeep, getIn, setIn, and updateIn, found on List, Map or OrderedMap ?
I tried doing something like this (where index is 0 and deviceId is 5a12356ws13tch), but it does not work:
export const oneAlertFetched = (state, {deviceId, index, alert}) => state.setIn(['alerts', deviceId, index], alert).merge({fetching: false})
I tried this as well. Does not work.
export const oneAlertFetched = (state, {deviceId, index, alert}) => {
const a = state.alerts[deviceId][index]
state.alerts[deviceId][index] = Object.assign({}, a, alert)
return
}
By immutable, you mean that your property is non-writable.
If you want to modify your object in-place (not recommended), you will need the property to be at least configurable:
const device = alerts['5a12356ws13tch'][0];
if (Object.getOwnPropertyDescriptor(device, 'count').configurable) {
// Manually make it `writable`
Object.defineProperty(device, 'count', {
writable: true
});
// Update property's value
device.count++;
// Set it back to `non-writable`
Object.defineProperty(device, 'count', {
writable: false
});
}
console.log(device.count); // 2
If it is not configurable (cannot make it writable), or you do not want to jeopardize your application (it must be non-writable on purpose), then you should work on copies.
const device = alerts['5a12356ws13tch'][0];
alerts['5a12356ws13tch'][0] = Object.assign({}, device, {count: device.count + 1});
Object.assign() works on flat objects. If you need deep copy, have a look at my SO answer there.
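For instance, a shallow-vs-deep sketch (structuredClone is built into modern browsers and recent Node versions; it copies plain data but not functions or prototypes):
const shallow = { ...alerts };          // one level copied; nested arrays/objects still shared
const deep = structuredClone(alerts);   // the whole plain-data tree is copied
deep['5a12356ws13tch'][0].count += 1;   // the original alerts object is untouched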
I think you mean you want to return a new object with the updated payload?
function getNextAlerts(alerts, parentDeviceId, payload) {
const alertsForDevice = alerts[parentDeviceId];
if (!alertsForDevice || alertsForDevice.length === 0) {
console.log('No alerts for device', parentDeviceId);
return;
}
return {
...alerts,
[parentDeviceId]: alerts[parentDeviceId].map(item =>
item._id === payload._id ? payload : item
),
}
}
const alerts = {
'5a12356ws13tch': [
{
_id: '5af7c8652552070000000064',
device_id: '5a8c76171bbb57b2950000c4',
count: 1
},
{
_id: '5af7c8722552070000000068',
device_id: '5a8c76171bbb57b2950000c4',
count: 2
}
]
};
const nextAlerts = getNextAlerts(alerts, '5a12356ws13tch', {
_id: '5af7c8652552070000000064',
device_id: '5a8c76171bbb57b2950000c4',
count: 2,
});
console.log('nextAlerts:', nextAlerts);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.min.js"></script>
If you're working with plain JavaScript objects and want to keep the "immutable" approach, you have to use spreads all over the nested structure of the state object.
But, there are some tools already targeting this issue - lenses.
Here is the example of both approaches, array/object spreads and lenses - ramda repl.
In short, your example via spreads:
const oneAlertFetched = (state, { deviceId, index, alert }) => ({
...state,
alerts: {
...state.alerts,
[deviceId]: [
...state.alerts[deviceId].slice(0, index),
{ ...state.alerts[deviceId][index], ...alert },
...state.alerts[deviceId].slice(index + 1)
],
}
})
And via lenses using Ramda's over, lensPath, merge and __*:
const oneAlertFetched = (state, { deviceId, index, alert }) =>
R.over(
R.lensPath(['alerts', deviceId, index]),
R.merge(R.__, alert),
state
)
* R.__ placeholder used to swap 1st & 2nd parameters of R.merge
PS: the lenses solution is intentionally adjusted to match the declaration of your function, so you can easily compare the two approaches. However, in real life, with such a powerful and flexible tool, we can rewrite the function to be more readable, reusable, and performant.
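For example, a slightly more reusable shape (a sketch; mergeAt is a made-up helper name, and R.mergeLeft requires a reasonably recent Ramda):
// Reusable helper built from the same Ramda pieces: merge a patch into
// whatever object sits at the given path, returning a new state.
const mergeAt = path => patch => R.over(R.lensPath(path), R.mergeLeft(patch))

const oneAlertFetched = (state, { deviceId, index, alert }) =>
  mergeAt(['alerts', deviceId, index])(alert)(state)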

How do I swap array elements in an immutable fashion within a Redux reducer?

The relevant Redux state consists of an array of objects representing layers.
Example:
let state = [
{ id: 1 }, { id: 2 }, { id: 3 }
]
I have a Redux action called moveLayerIndex:
actions.js
export const moveLayerIndex = (id, destinationIndex) => ({
type: MOVE_LAYER_INDEX,
id,
destinationIndex
})
I would like the reducer to handle the action by swapping the position of the elements in the array.
reducers/layers.js
const layers = (state=[], action) => {
switch(action.type) {
case 'MOVE_LAYER_INDEX':
/* What should I put here to make the below test pass */
default:
return state
}
}
The test verifies that the Redux reducer swaps the array's elements in an immutable fashion.
Deep-freeze is used to check the initial state is not mutated in any way.
How do I make this test pass?
test/reducers/index.js
import { expect } from 'chai'
import deepFreeze from 'deep-freeze'
const id=1
const destinationIndex=1
it('move position of layer', () => {
const action = actions.moveLayerIndex(id, destinationIndex)
const initialState = [
{
id: 1
},
{
id: 2
},
{
id: 3
}
]
const expectedState = [
{
id: 2
},
{
id: 1
},
{
id: 3
}
]
deepFreeze(initialState)
expect(layers(initialState, action)).to.eql(expectedState)
})
One of the key ideas of immutable updates is that while you should never directly modify the original items, it's okay to make a copy and mutate the copy before returning it.
With that in mind, this function should do what you want:
function immutablySwapItems(items, firstIndex, secondIndex) {
// Constant reference - we can still modify the array itself
const results= items.slice();
const firstItem = items[firstIndex];
results[firstIndex] = items[secondIndex];
results[secondIndex] = firstItem;
return results;
}
I wrote a section for the Redux docs called Structuring Reducers - Immutable Update Patterns which gives examples of some related ways to update data.
You could use the map function to make the swap:
function immutablySwapItems(items, firstIndex, secondIndex) {
return items.map(function(element, index) {
if (index === firstIndex) return items[secondIndex];
else if (index === secondIndex) return items[firstIndex];
else return element;
});
}
In ES2015 style:
const immutablySwapItems = (items, firstIndex, secondIndex) =>
items.map(
(element, index) =>
index === firstIndex
? items[secondIndex]
: index === secondIndex
? items[firstIndex]
: element
)
There is nothing wrong with the other two answers, but I think there is even a simpler way to do it with ES6.
const state = [{
id: 1
}, {
id: 2
}, {
id: 3
}];
const immutableSwap = (items, firstIndex, secondIndex) => {
const result = [...items];
[result[firstIndex], result[secondIndex]] = [result[secondIndex], result[firstIndex]];
return result;
}
const swapped = immutableSwap(state, 2, 0);
console.log("Swapped:", swapped);
console.log("Original:", state);
