Exhaustiveness checks without explicit strings - javascript

Assigning to (or asserting) never at the end of a function is a technique used in TypeScript to force exhaustiveness checks at compile time.
For the compiler to detect this, however, it requires explicit string literals to check against in order to determine that the function definitively returns before the assignment/assertion of never.
Would it be possible to introduce some sort of typed variation of Object.freeze that only works on object literals, and further up the chain, so that something like the following could be done?
Even better, is there a way to create an interface whose keys are automatically those of each Action's type (in this example)? If that were the case, actionMap could simply be declared as that interface, which would force the check at compile time.
Both are solutions to the same problem: given only a discriminated union, is it possible to do exhaustiveness checks like this at compile time without needing to use explicit strings in the function?
interface Increment {
  type: 'increment'
}
interface Decrement {
  type: 'decrement'
}
type Action = Increment | Decrement

const inc: Increment = { type: 'increment' };
const dec: Decrement = { type: 'decrement' };

// this would be a TypeScript variation
const actionMap = Object.freeze({
  [inc.type]: n => n + 1,
  [dec.type]: n => n - 1
});

function doAction(action: Action, val: number): number {
  if (actionMap[action.type]) {
    return actionMap[action.type](val);
  }
  // this would error at compile time if the above check failed
  const _exhaustiveCheck: never = action;
}

console.log(doAction(inc, 1));
console.log(doAction(dec, 1));

There is a fairly straightforward way to make a map that guarantees it has a value for each case in a discriminated union: set the type of its keys to the union's discriminant.
type ActionMap = {
  [P in Action["type"]]: (val: number) => number
};
You can then implement this type, which will look something like this:
var map: ActionMap = {
  decrement: n => n - 1,
  increment: n => n + 1
}
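Putting the two pieces together, here is a minimal self-contained sketch (assuming the Action union from the question) showing that the mapped type forces exhaustiveness at compile time:

```typescript
interface Increment { type: 'increment' }
interface Decrement { type: 'decrement' }
type Action = Increment | Decrement;

// A handler is required for every member of the union;
// removing a key below is a compile-time error.
type ActionMap = {
  [P in Action['type']]: (val: number) => number;
};

const actionMap: ActionMap = {
  increment: n => n + 1,
  decrement: n => n - 1,
};

function doAction(action: Action, val: number): number {
  return actionMap[action.type](val);
}
```

Unlike the Object.freeze version, no runtime check or never assertion is needed: the lookup is statically known to be total.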
Edit: After a bunch of messing around, I found a much more versatile and powerful solution that lets you type not only the keys of the discriminated union but also the payload.
First: Define your union in the form of key:type pairs. (I think this is cleaner to read anyway)
type Actions = {
  "increment": { incrementValue: number }
  "decrement": { decrementValue: number }
}
Second: Create an Action discriminated union from that map. This isn't the clearest code in the world; what it does is, for each key/value pair in Actions, create a new type by adding a type member ({ type: key }), then union all those types together to create your discriminated union.
type Action = {
  [P in keyof Actions]: { type: P } & Actions[P]
}[keyof Actions];
Third: Create a type for your map.
type ActionsMap = {
  [P in keyof Actions]: (val: number, action: Actions[P]) => number
}
Fourth: Enjoy your entirely type-safe action/reducer map!
const map: ActionsMap = {
  decrement: (val, action) => val - action.decrementValue,
  increment: (val, action) => val + action.incrementValue,
}
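A sketch of how the map might be consumed; the reduce dispatcher and the cast inside it are my addition, not part of the answer (TypeScript does not track the correlation between map[action.type] and action without help):

```typescript
type Actions = {
  increment: { incrementValue: number };
  decrement: { decrementValue: number };
};

type Action = {
  [P in keyof Actions]: { type: P } & Actions[P];
}[keyof Actions];

type ActionsMap = {
  [P in keyof Actions]: (val: number, action: Actions[P]) => number;
};

const map: ActionsMap = {
  decrement: (val, action) => val - action.decrementValue,
  increment: (val, action) => val + action.incrementValue,
};

// Hypothetical dispatcher: the cast erases the per-key correlation
// that TypeScript cannot track across the two lookups.
function reduce(val: number, action: Action): number {
  const handler = map[action.type] as (v: number, a: unknown) => number;
  return handler(val, action);
}
```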
Fair warning: this very much pushes the limits of what TypeScript's type system can do, and I have personally been bitten by relying on some of TypeScript's fringe behavior only to have it change in the next version.

Related

In TypeScript, how can I use enum as keys in a type but with additional fields? [duplicate]

Existing JavaScript code has "records" where the id is numeric and the other attributes string.
Trying to define this type:
type T = {
  id: number;
  [key: string]: string
}
gives error 2411: Property 'id' of type 'number' is not assignable to string index type 'string'.
There is no specific type in TypeScript that corresponds to your desired structure. String index signatures must apply to every property, even the manually declared ones like id. What you're looking for is something like a "rest index signature" or a "default property type", and there is an open suggestion in GitHub asking for this: microsoft/TypeScript#17867. A while ago there was some work done that would have enabled this, but it was shelved (see this comment for more info). So it's not clear when or if this will happen.
You could widen the type of the index signature property so it includes the hardcoded properties via a union, like
type WidenedT = {
  id: number;
  [key: string]: string | number
}
but then you'd have to test every dynamic property before you could treat it as a string:
function processWidenedT(t: WidenedT) {
  t.id.toFixed(); // okay
  t.random.toUpperCase(); // error
  if (typeof t.random === "string") t.random.toUpperCase(); // okay
}
The best way to proceed here would be if you could refactor your JavaScript so that it doesn't "mix" the string-valued bag of properties with a number-valued id. For example:
type RefactoredT = {
  id: number;
  props: { [k: string]: string };
}
Here id and props are completely separate and you don't have to do any complicated type logic to figure out whether your properties are number or string valued. But this would require a bunch of changes to your existing JavaScript and might not be feasible.
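If refactoring is on the table, a small adapter can split incoming legacy records into the refactored shape; this helper is my sketch (hypothetical name), not part of the original answer:

```typescript
type RefactoredT = {
  id: number;
  props: { [k: string]: string };
};

// Split a legacy record ({ id: number } plus string-valued fields)
// into the refactored shape with a separate property bag.
function fromLegacy(rec: { id: number } & { [k: string]: string | number }): RefactoredT {
  const props: { [k: string]: string } = {};
  for (const [k, v] of Object.entries(rec)) {
    if (k !== "id" && typeof v === "string") props[k] = v;
  }
  return { id: rec.id, props };
}
```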
From here on out I'll assume you can't refactor your JavaScript. But notice how clean the above is compared to the messy stuff that's coming up:
One common workaround to the lack of rest index signatures is to use an intersection type to get around the constraint that index signatures must apply to every property:
type IntersectionT = {
  id: number;
} & { [k: string]: string };
It sort of kind of works; when given a value of type IntersectionT, the compiler sees the id property as a number and any other property as a string:
function processT(t: IntersectionT) {
  t.id.toFixed(); // okay
  t.random.toUpperCase(); // okay
  t.id = 1; // okay
  t.random = "hello"; // okay
}
But it's not really type safe, since you are technically claiming that id is both a number (according to the first intersection member) and a string (according to the second intersection member). And so you unfortunately can't assign an object literal to that type without the compiler complaining:
t = { id: 1, random: "hello" }; // error!
// Property 'id' is incompatible with index signature.
You have to work around that further by doing something like Object.assign():
const propBag: { [k: string]: string } = { random: "" };
t = Object.assign({ id: 1 }, propBag);
But this is annoying, since most users will never think to synthesize an object in such a roundabout way.
A different approach is to use a generic type to represent your type instead of a specific type. Think of writing a type checker that takes as input a candidate type, and returns something compatible if and only if that candidate type matches your desired structure:
type VerifyT<T> = { id: number } & { [K in keyof T]: K extends "id" ? unknown : string };
This will require a generic helper function so you can infer the generic T type, like this:
const asT = <T extends VerifyT<T>>(t: T) => t;
Now the compiler will allow you to use object literals and it will check them the way you expect:
asT({ id: 1, random: "hello" }); // okay
asT({ id: "hello" }); // error! string is not number
asT({ id: 1, random: 2 }); // error! number is not string
asT({ id: 1, random: "", thing: "", thang: "" }); // okay
It's a little harder to read a value of this type with unknown keys, though. The id property is fine, but other properties will not be known to exist, and you'll get an error:
function processT2<T extends VerifyT<T>>(t: T) {
  t.id.toFixed(); // okay
  t.random.toUpperCase(); // error! random not known to be a property
}
Finally, you can use a hybrid approach that combines the best aspects of the intersection and generic types. Use the generic type to create values, and the intersection type to read them:
function processT3<T extends VerifyT<T>>(t: T): void;
function processT3(t: IntersectionT): void {
  t.id.toFixed();
  if ("random" in t)
    t.random.toUpperCase(); // okay
}
processT3({ id: 1, random: "hello" });
The above is an overloaded function, where callers see the generic type, but the implementation sees the intersection type.
Playground link to code
You are getting this error because you have declared an indexable type (ref: https://www.typescriptlang.org/docs/handbook/interfaces.html#indexable-types) with string as the key type, so id being a number fails to conform to that declaration.
It is difficult to guess your intention here, but maybe you wanted something like this:
class T {
  id: number;
  values = new Map<string, string>();
}
I had this same issue, but returned the id as a string.
export type Party = { [key: string]: string }
I preferred to have a flat type and call parseInt(id) in the receiving code.
For my API, it's the simplest thing that could possibly work.

How to extract a single type from a Zod union type?

I'm using Zod and have an array containing different objects using a union. After parsing it, I want to iterate through each item and extract its "real" type / cut off the other types.
When checking for specific object properties, the following code works fine:
const objectWithNumber = zod.object({ num: zod.number() });
const objectWithBoolean = zod.object({ isTruthy: zod.boolean() });
const myArray = zod.array(zod.union([objectWithNumber, objectWithBoolean]));
const parsedArray = myArray.parse([{ isTruthy: true }, { num: 3 }]);
parsedArray.forEach((item) => {
  if ("num" in item) {
    console.info('objectWithNumber:', item);
    // TS knows about it => syntax support for objectWithNumber
  } else if ("isTruthy" in item) {
    console.info('objectWithBoolean:', item);
    // TS knows about it => syntax support for objectWithBoolean
  } else {
    console.error('unknown');
  }
});
An alternative would be using discriminated unions for this
const objectWithNumber = zod.object({ type: zod.literal("objectWithNumber"), num: zod.number() });
const objectWithBoolean = zod.object({ type: zod.literal("objectWithBoolean"), isTruthy: zod.boolean() });
const myArray = zod.array(zod.discriminatedUnion("type", [ objectWithNumber, objectWithBoolean ]));
const parsedArray = myArray.parse([{ type: "objectWithBoolean", isTruthy: true }, { type: "objectWithNumber", num: 3 }]);
parsedArray.forEach(item => {
  if (item.type === "objectWithNumber") {
    console.info('objectWithNumber:', item);
    // TS knows about it => syntax support for objectWithNumber
  } else if (item.type === "objectWithBoolean") {
    console.info('objectWithBoolean:', item);
    // TS knows about it => syntax support for objectWithBoolean
  } else {
    console.error('unknown');
  }
});
but I think I misunderstood this concept, because it is just more code to write (I can always add a shared property and inspect that one). Any help on this is much appreciated :)
Are there better ways to identify a specific schema?
If I understood you correctly, your question boils down to "Why should one use discriminated unions instead of shared fields combined with optional fields?" (Zod just lifts this concept into runtime, providing validation functionality.) In your example there is in fact no reason to use a discriminator (the type property), because each object holds only a single non-optional property that is mutually exclusive between the types and can easily be used to distinguish them. However, the main issue with this code is that the object names (or shapes/structures) do not communicate any intent - that is why it is difficult to see the benefits of discriminated unions.
You can think of a discriminated union as a type that describes a family of objects and provides a unified mechanism (in the form of a discriminator property) to identify them. This approach is less fragile than checking for the existence of manually picked properties (for instance, the num property). What if num for some reason becomes optional? Then your check for num's existence will break.
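To answer the extraction part of the question at the type level: a single member can be pulled out of a discriminated union with TypeScript's built-in Extract utility. A sketch without Zod (the same works on the z.infer output of zod.discriminatedUnion):

```typescript
type Item =
  | { type: "objectWithNumber"; num: number }
  | { type: "objectWithBoolean"; isTruthy: boolean };

// Extract keeps only the union members whose discriminator matches.
type NumberItem = Extract<Item, { type: "objectWithNumber" }>;

const n: NumberItem = { type: "objectWithNumber", num: 3 };

function describe(item: Item): string {
  // Checking the discriminator narrows `item` to a single member.
  return item.type === "objectWithNumber"
    ? `num=${item.num}`
    : `isTruthy=${item.isTruthy}`;
}
```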
Another argument for discriminated union is to decrease the number of optional properties. Compare two following examples:
// shared and optional fields instead of discriminated union
type Vehicle = {
  name: string;
  combustionEngine?: PetrolEngine | DieselEngine;
  tankCapacity?: number;
  electricEngine?: ElectricEngine;
  batteryCapacity?: number;
}

// discriminated union
type Vehicle =
  | GasolineCar
  | ElectricCar

type GasolineCar = {
  kind: "gasolineCar";
  name: string;
  engine: PetrolEngine | DieselEngine;
  tankCapacity: number;
}

type ElectricCar = {
  kind: "electricCar";
  name: string;
  engine: ElectricEngine;
  batteryCapacity: number;
}
The example with the discriminated union produces much more descriptive code. You don't have to add multiple checks for optional fields - instead, identify the type by its discriminator (as early as possible) and pass the object to a function/method accepting the narrower type (GasolineCar or ElectricCar instead of Vehicle).
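Consuming such a union then combines naturally with the exhaustiveness-check technique from the first question; a minimal sketch (engine types omitted for brevity):

```typescript
type GasolineCar = { kind: "gasolineCar"; name: string; tankCapacity: number };
type ElectricCar = { kind: "electricCar"; name: string; batteryCapacity: number };
type Vehicle = GasolineCar | ElectricCar;

function capacity(v: Vehicle): number {
  switch (v.kind) {
    case "gasolineCar": return v.tankCapacity;
    case "electricCar": return v.batteryCapacity;
    default: {
      // Compile-time exhaustiveness: fails to typecheck if a kind is unhandled.
      const _exhaustive: never = v;
      return _exhaustive;
    }
  }
}
```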


Generic, higher order functions

I'm trying to add Flow type information to a small library of mine.
The library defines some functions that are generic over Object, Array, Set, Map and other types.
Here a small piece example to give an idea:
function set(obj, key, value) {
  if (isMap(obj)) { obj.set(key, value); }
  else if (isSet(obj)) { obj.add(value); }
  else { obj[key] = value; }
}

function instantiateSameType(obj) {
  if (isArray(obj)) { return []; }
  else if (isMap(obj)) { return new Map(); }
  else if (isSet(obj)) { return new Set(); }
  else { return {}; }
}

function forEach(obj, fn) {
  if (obj.forEach) obj.forEach((value, key) => fn(value, key, obj));
  else Object.entries(obj).forEach(([key, value]) => fn(value, key, obj));
}

function map(obj, fn) {
  const result = instantiateSameType(obj);
  forEach(obj, (value, key) => {
    set(result, key, fn(value, key, obj));
  });
  return result;
}
How can I define types for map?
I'd want to avoid giving a specialized version for each of the 4 types I listed in the example, as map is generic over them.
I feel the need to define higher-order interfaces, and implement them for existing types, but can't find much about any of this...
Any hints or ideas?
Update 2017-11-28: fp-ts is the successor to flow-static-land. fp-ts is a newer library by the same author; it supports both Flow and TypeScript.
There is a library, flow-static-land, that does something quite similar to what you are attempting. You could probably learn some interesting things by looking at that code and reading the accompanying blog posts by @gcanti. I'll expand on the strategy in flow-static-land; but keep in mind that you can implement your iteration functions without higher-kinded types if you are OK with a closed set of iterable types.
As #ftor mentions, if you want polymorphic functions that can work on an open set of collection types then you want higher-kinded types (HKTs). Higher-kinded types are types that take type parameters, but with one or more of those parameters left unspecified. For example arrays in Flow take a type parameter to specify the type of elements in the array (Array<V>), and the same goes for maps (Map<K, V>). Sometimes you want to be able to refer to a parameterized type without specifying all of its type parameters. For example map should be able to operate on all arrays or maps regardless of their type parameters:
function map<K, A, B, M: Array<_> | Map<K, _>>(collection: M<A>, fn: A => B): M<B>
In this case M is a variable representing a higher-kinded type. We can pass M around as a first-class type, and fill in its type parameter with different types at different times. Flow does not natively support HKTs, so the syntax above does not work. But it is possible to fake HKTs with some type alias indirection, which is what flow-static-land does. There are details in the blog post, Higher kinded types with Flow.
To get a fully-polymorphic version of map, flow-static-land emulates Haskell type classes (which rely on HKTs). map is the defining feature of a type class called Functor; flow-static-land has this definition for Functor (from Functor.js):
export interface Functor<F> {
  map<A, B>(f: (a: A) => B, fa: HKT<F, A>): HKT<F, B>
}
The HKT type is flow-static-land's workaround for implementing higher-kinded types. The actual higher-kinded type is F, which you can think of as standing in for Array or Map or any type that could implement map. Expressions like HKT<F, A> can be thought of as F<A> where the higher-kinded type F has been applied to the type parameter A. (I'm doing some hand waving here - F is actually a type-level tag. But the simplified view works to some extent.)
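For comparison, the same encoding can be sketched in TypeScript - roughly how fp-ts (mentioned above) approaches it, with a type-level registry instead of Flow's HKT pair; the names here are illustrative, not fp-ts's actual API:

```typescript
// A registry mapping a type-level tag to the concrete parameterized type.
interface URItoKind<A> {
  Array: Array<A>;
}
type URIS = keyof URItoKind<unknown>;
// Kind<F, A> plays the role of HKT<F, A>: "apply" tag F to parameter A.
type Kind<F extends URIS, A> = URItoKind<A>[F];

interface Functor<F extends URIS> {
  map<A, B>(f: (a: A) => B, fa: Kind<F, A>): Kind<F, B>;
}

const arrayFunctor: Functor<"Array"> = {
  map: (f, fa) => fa.map(f),
};
```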
You can create an implementation of Functor for any type. But there is a catch: you need to define your type in terms of HKT so that it can be used as a higher-kinded type. In flow-static-land in the module Arr.js we see this higher-kinded version of the array type:
class IsArr {} // type-level tag, not used at runtime
export type ArrV<A> = Array<A>; // used internally
export type Arr<A> = HKT<IsArr, A>; // the HKT-compatible array type
If you do not want to use Arr<A> in place of Array<A> everywhere in your code then you need to convert using inj: (a: Array<A>) => Arr<A> and prj: (fa: Arr<A>) => Array<A>. inj and prj are type-level transformers - at runtime both of those functions just return their input, so they are likely to be inlined by the JIT. There is no runtime difference between Arr<A> and Array<A>.
A Functor implementation for Arr looks like this:
const arrFunctor: Functor<IsArr> = {
  map<A, B>(f: (a: A) => B, fa: Arr<A>): Arr<B> {
    const plainArray = prj(fa)
    const mapped = plainArray.map(f)
    return inj(mapped)
  }
}
In fact the entire Arr.js module is an Arr implementation for Functor, Foldable, Traversable, and other useful type classes. Using that implementation with polymorphic code looks like this:
import * as Arr from 'flow-static-land/lib/Arr'
import { type Foldable } from 'flow-static-land/lib/Foldable'
import { type Functor } from 'flow-static-land/lib/Functor'
import { type HKT } from 'flow-static-land/lib/HKT'
type Order = { items: string[], total: number }
// this code is polymorphic in that it is agnostic of the collection kind
// that is given
function computeTotal<F>(
  f: Foldable<F> & Functor<F>,
  orders: HKT<F, Order>
): number {
  const totals = f.map(order => order.total, orders)
  return f.reduce((sum, total) => sum + total, 0, totals)
}
// calling the code with an `Arr<Order>` collection
const orders = Arr.inj([{ items: ['foo', 'bar'], total: 23.6 }])
const t = computeTotal(Arr, orders)
computeTotal needs to apply map and reduce to its input. Instead of constraining the input to a given collection type, computeTotal uses its first argument to constrain its input to types that implement both Foldable and Functor: f: Foldable<F> & Functor<F>. At the type-level the argument f acts as a "witness" to prove that the given collection type implements both map and reduce. At runtime f provides references to the specific implementations of map and reduce to be used. At the entry point to the polymorphic code (where computeTotal is called with a statically-known collection type) the Foldable & Functor implementation is given as the argument Arr. Because Javascript is not designed for type classes the choice of Arr must be given explicitly; but Flow will at least throw an error if you try to use an implementation that is incompatible with the collection type that is used.
To round this out here is an example of a polymorphic function, allItems, that accepts a collection, and returns a collection of the same kind. allItems is agnostic of the specific type of collection that it operates on:
import { type Monad } from 'flow-static-land/lib/Monad'
import { type Monoid, concatAll } from 'flow-static-land/lib/Monoid'
import { type Pointed } from 'flow-static-land/lib/Pointed'
// accepts any collection type that implements `Monad` & `Monoid`, returns
// a collection of the same kind but containing `string` values instead of
// `Order` values
function allItems<F>(f: Monad<F> & Monoid<*>, orders: HKT<F, Order>): HKT<F, string> {
  return f.chain(order => fromArray(f, order.items), orders)
}

function fromArray<F, A>(f: Pointed<F> & Monoid<*>, xs: A[]): HKT<F, A> {
  return concatAll(f, xs.map(f.of))
}
// called with an `Arr<Order>` collection
const is = allItems(Arr, orders)
chain is flow-static-land's version of flatMap. For every element in a collection, chain runs a callback that must produce a collection of the same kind (but it could hold a different value type). That produces effectively a collection of collections. chain then flattens that to a single level for you. So chain is basically a combination of map and flatten.
I included fromArray because the callback given to chain must return the same kind of collection that allItems accepts and returns - returning an Array from the chain callback will not work. I used a Pointed constraint in fromArray to get the of function, which puts a single value into a collection of the appropriate kind. Pointed does not appear in the constraints of allItems because allItems has a Monad constraint, and every Monad implementation is also an implementation of Pointed, Chain, Functor, and some others.
I am personally a fan of flow-static-land. The functional style and use of HKTs result in better type safety than one could get with object-oriented-style duck typing. But there are drawbacks: error messages from Flow can become very verbose when using intersections like Foldable<F> & Functor<F>, and the code style requires extra training - it will seem quite strange to programmers who are not well acquainted with Haskell.
I wanted to follow up with another answer that matches up with the question that you actually asked. Flow can do just what you want. But it does get a bit painful implementing functions that operate on all four of those collection types because in the case of Map the type for keys is fully generic, but for Array the key type must be number, and due to the way objects are implemented in Javascript the key type for Object is always effectively string. (Set does not have keys, but that does not matter too much because you do not need to use keys to set values in a Set.) The safest way to work around the Array and Object special cases would be to provide an overloaded type signature for every function. But it turns out to be quite difficult to tell Flow that key might be the fully-generic type K or string or number depending on the type of obj. The most practical option is to make each function fully generic in the key type. But you have to remember that these functions will fail if you try to use arrays or plain objects with the wrong key type, and you will not get a type error in those cases.
Let's start with a type for the set of collection types that you are working with:
type MyIterable<K, V> = Map<K, V> | Set<V> | Array<V> | Pojo<V>
type Pojo<V> = { [key: string]: V } // plain object
The collection types must all be listed at this point. If you want to work with an open set of collection types instead then see my other answer. And note that my other answer avoids the type-safety holes in the solution here.
There is a handy trick with Flow: you can put the keyword %checks in the type signature of a function that returns a boolean, and Flow will be able to use invocations of that function at type-checking time for type refinements. But the body of the function must use constructions that Flow knows how to use for type refinements because Flow does not actually run the function at type-checking time. For example:
function isMap(obj: any): boolean %checks {
  return obj instanceof Map
}

function isSet(obj: any): boolean %checks {
  return obj instanceof Set
}

function isArray(obj: any): boolean %checks {
  return obj instanceof Array
}
I mentioned you would need a couple of type casts. One instance is in set: Flow knows that when assigning to an array index, the index variable should be a number, and it also knows that K might not be number. The same goes for assigning to plain object properties, since the Pojo type alias specifies string keys. So in the code branch for those cases you need to type-cast key to any, which effectively disables type checking for that use of key.
function set<K, V>(obj: MyIterable<K, V>, key: K, value: V) {
  if (isMap(obj)) { obj.set(key, value); }
  else if (isSet(obj)) { obj.add(value); }
  else { obj[(key: any)] = value; }
}
Your instantiateSameType function just needs a type signature. An important point to keep in mind is that you use instantiateSameType to construct the result of map, and the type of values in the collection can change between the input and output when using map. So it is important to use two different type variables for the value type in the input and output of instantiateSameType as well. You might also allow instantiateSameType to change the key type; but that is not required to make map work correctly.
function instantiateSameType<K, A, B>(obj: MyIterable<K, A>): MyIterable<K, B> {
  if (isArray(obj)) { return []; }
  else if (isMap(obj)) { return new Map(); }
  else if (isSet(obj)) { return new Set(); }
  else { return {}; }
}
That means the output of instantiateSameType can hold any type of values: it might be the same type as the values in the input collection, or it might not.
In your implementation of forEach you check for the presence of obj.forEach as a type refinement. This is confusing to Flow because one of the types that make up MyIterable is a plain Javascript object, which might hold any string key. Flow cannot assume that obj.forEach will be falsy. So you need to use a different check. Re-using the isArray, etc. predicates works well:
function forEach<K, V, M: MyIterable<K, V>>(obj: M, fn: (value: V, key: K, obj: M) => any) {
  if (isArray(obj) || isMap(obj) || isSet(obj)) {
    obj.forEach((value, key) => fn(value, (key: any), obj));
  } else {
    for (const key of Object.keys(obj)) {
      fn(obj[key], (key: any), obj)
    }
  }
}
There are two more issues to point out: Flow's library definition for Object.entries looks like this (from core.js):
declare class Object {
  /* ... */
  static entries(object: any): Array<[string, mixed]>;
  /* ... */
}
Flow assumes that the type of values returned by Object.entries will be mixed, but that type should be V. The fix for this is to get values via object property access in a loop.
The type of the key argument to the given callback should be K, but Flow knows that in the array case that type will actually be number, and in the plain object case it will be string. A couple more type casts are necessary to fix those cases.
Finally, map:
function map<K, A, B, M: MyIterable<K, A>>(
obj: M, fn: (value: A, key: K, obj: M) => B
): MyIterable<K, B> {
const result = instantiateSameType( obj );
forEach(obj, (value, key)=>{
set( result, key, fn(value, key, obj) );
});
return result;
}
Some things that I want to point out here: the input collection has a type variable A while the output collection has the variable B. This is because map might change the type of values. And I set up a type variable M for the type of the input collection; that is to inform Flow that the type of the callback argument obj is the same as the type of the input collection. That allows you to use functions in your callback that are particular to the specific collection type that you provided when invoking map.
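The set helper that map relies on is not shown in the question; presumably it dispatches on the collection type the same way the predicates do. A plain-JavaScript sketch of the whole pipeline (annotations stripped, helper names hypothetical) might be:

```javascript
// Hypothetical `set` helper: writes a key/value pair using whichever
// mechanism the collection type supports.
function set(obj, key, value) {
  if (Array.isArray(obj)) { obj[key] = value; }
  else if (obj instanceof Map) { obj.set(key, value); }
  else if (obj instanceof Set) { obj.add(value); } // Sets have no keys
  else { obj[key] = value; }
}

// Untyped versions of the functions above, for a runnable demo.
function instantiateSameType(obj) {
  if (Array.isArray(obj)) return [];
  if (obj instanceof Map) return new Map();
  if (obj instanceof Set) return new Set();
  return {};
}

function forEach(obj, fn) {
  if (Array.isArray(obj) || obj instanceof Map || obj instanceof Set) {
    obj.forEach((value, key) => fn(value, key, obj));
  } else {
    for (const key of Object.keys(obj)) fn(obj[key], key, obj);
  }
}

function map(obj, fn) {
  const result = instantiateSameType(obj);
  forEach(obj, (value, key) => { set(result, key, fn(value, key, obj)); });
  return result;
}

const doubled = map([1, 2, 3], n => n * 2);              // [2, 4, 6]
const shouted = map({ a: 'hi' }, s => s.toUpperCase()); // { a: 'HI' }
```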

How to properly type a function that converts an array of objects into an object of objects

I'm trying this syntax using .reduce:
function arrToObject<T: {key: string}, R: {[string]: T}>(list: Array<T>): R {
return list.reduce((result: R, item: T): R => {
result[item.key] = item;
return result;
}, {});
}
But Flow gives the following error:
call of method `reduce`. Function cannot be called on any member of intersection type
The root problem is in reconciling the generic type R with {}, the initial value of the accumulator.
A hacky fix would be to prevent Flow from trying to reconcile the types at all:
function arrToObject<T: {key: string}, R: {[string]: T}>(list: Array<T>): R {
var accum: any = {};
return list.reduce((result: R, item: T): R => {
result[item.key] = item;
return result;
}, accum);
}
But there are problems with that, as we'll find out further down. Things get a little clearer when we explicitly say what type the accumulator is:
function arrToObject<T: {key: string}, R: {[string]: T}>(list: Array<T>): R {
let accum: R = {};
return list.reduce((result: R, item: T): R => {
result[item.key] = item;
return result;
}, accum);
}
This forces the problematic type reconciliation into a simpler statement without so much noise around it. And it produces a much better error:
3: let accum: R = {};
^ object literal. This type is incompatible with
3: let accum: R = {};
^ some incompatible instantiation of `R`
This error doesn't tell the whole story, but it brings us closer to what is really going on.
Flow requires that the generic parameter R can be any type compatible with the constraint. A valid instantiation of R could therefore be more specific than the constraint you put on it, for example by having additional required fields.
This creates a problem. Inside the function body, you can't possibly know what the actual instantiation of R looks like. So you can't construct one. Even though the error messages are sucky, Flow is correct to stop you from doing this! What if someone called your function with an R instantiated to be something with an extra field, for example {[string]: {key: string}, selected: boolean}? Then you would have to have somehow known to initialise the accumulator with a selected: boolean field.
If you are able to, a much better solution than the hack above is to remove the generics altogether:
type KeyMap = {[string]: {key: string}};
type Item = {key: string};
function arrToObject(list: Array<Item>): KeyMap {
return list.reduce((result:KeyMap, item: Item): KeyMap => {
result[item.key] = item;
return result;
}, {});
}
I also introduced some type aliases to prevent the function signature getting out of control.
Now that the generics are replaced with concrete types, things are much simpler. The body of the function can create a new initial value for the accumulator because its exact type is known.
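At runtime the annotations disappear entirely, so the concrete-typed version behaves as ordinary JavaScript:

```javascript
// Runtime behaviour of arrToObject (Flow annotations removed).
function arrToObject(list) {
  return list.reduce((result, item) => {
    result[item.key] = item;
    return result;
  }, {});
}

const items = [{ key: 'a', n: 1 }, { key: 'b', n: 2 }];
const byKey = arrToObject(items);
console.log(byKey.a.n); // 1
console.log(byKey.b.n); // 2
```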
If you really need the generic type parameters for some reason, you can also work around this by passing the accumulator's initial value as another argument to arrToObject. That will work because the caller of your arrToObject will know the concrete type and be able to instantiate it.
@PeterHall's answer identifies the reason for the failed typechecking correctly, but the solution is not optimal. As the answer says, the problem is that R may be any type satisfying {[string]: T}, but what's required is actually that type exactly. In contrast, a generic type parameter is needed to preserve item types when items go into and out of the arrToObject function, with {key: string} being the minimum signature required of the inputs.
This signature works:
function arrToObject<T: {key: string}>(list: Array<T>): {[string]: T}
Or, with named types:
type KeyMap<T> = {[string]: T};
type Item = {key: string};
function arrToObject<T: Item>(list: Array<T>): KeyMap<T>
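With that signature, the body can construct the accumulator because the return type is exactly {[string]: T} rather than some unknown instantiation of a generic. A sketch of a full implementation (assuming Flow accepts the empty object literal as the initial accumulator; depending on the Flow version you may need to annotate it explicitly):

```
type KeyMap<T> = {[string]: T};
type Item = {key: string};

function arrToObject<T: Item>(list: Array<T>): KeyMap<T> {
  return list.reduce((result: KeyMap<T>, item: T): KeyMap<T> => {
    result[item.key] = item;
    return result;
  }, {});
}
```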
