
Shouldn't `string & any[]` result in `never`?


I've noticed something weird with TypeScript. I've got a type union which contains some array types (string[], number[]) and some non-array types (string, number). If I use type inference, everything works as expected:

type bar = string | number | string[] | number[];
declare const foo: bar;

if (Array.isArray(foo))
{
    foo // string[] | number[]
}
else
{
    foo // string | number
}

But if I want to restrict the type directly to array types and use a type intersection, I get something I didn't expect:

declare const foo: bar & any[];

// expected type: string[] | number[]

foo // (string & any[]) | (number & any[]) | (string[] & any[]) | (number[] & any[])

Why is that?
Shouldn't `string & any[]` evaluate to `never`, and `string[] & any[]` to `string[]`?

[link to playground]


Solution

  • It's reasonable to expect that intersections of completely disjoint types should evaluate to never, given the intuition that the intersection of two non-overlapping sets is empty. This has been requested at various times (see ms/TS#18210).

    In fact this reduction to never has been partially implemented (see ms/TS#18438) since TypeScript 2.6. Specifically, a type like ("a" | "b") & "c" becomes never, while "a" & "c" does not. This was done so that combining unions and intersections wouldn't lead to enormous union types.
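
    For illustration, here's roughly what that partial implementation does, sketched against the behavior of the compiler versions discussed here (later TypeScript releases reduce more of these intersections to never, so a current playground may show different results):

    // An intersection whose operand is a union gets distributed, and
    // each resulting branch of disjoint unit types is dropped:
    type Reduced = ("a" | "b") & "c"; // never

    // A bare intersection of disjoint unit types was left alone
    // rather than eagerly reduced:
    type NotReduced = "a" & "c"; // "a" & "c", not never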

    But the description of the pull request introducing this implementation gives some insight into the answer to your question: why doesn't the compiler do this all the time? The quotes below are from Anders Hejlsberg, the lead architect of the TypeScript language.

    Here's one issue he mentioned:

    We could in theory be more aggressive about removing empty intersection types, but we don't want to break code that uses intersections to "tag" primitive types.

    This "tagging" or "branding" is a way to simulate nominal typing (see ms/TS#202) in TypeScript. TypeScript uses structural typing to compare types, meaning the compiler doesn't distinguish two types A and B if they have the same shape. Sometimes you want to be able to make two otherwise-identical types be treated differently by the compiler (the default behavior in a nominally-typed language like Java, where just the names A and B are enough to distinguish the types). Well, if you intersect one of the types with some extra property like type AA = A & {randomPropName: any}, now you can distinguish AA from B. This sort of branding is mentioned a lot, and even used in the TypeScript compiler code itself.

    So somewhere people are relying on string & {hoobydooby: true} to be distinguished from string & {scoobydooby: false}. If both of those are reduced to never, everything breaks. So they don't do that.
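
    For concreteness, here's a minimal sketch of the branding pattern (the brand property names are made up for illustration):

    // Structurally both aliases are "string plus a tag", but the
    // incompatible tags make the compiler treat them as distinct.
    type UserId = string & { readonly __brand: "UserId" };
    type OrderId = string & { readonly __brand: "OrderId" };

    // Branded values are typically minted with a type assertion.
    const userId = "u-123" as UserId;
    const orderId = "o-456" as OrderId;

    declare function fetchUser(id: UserId): void;

    fetchUser(userId); // okay
    // fetchUser(orderId); // error! OrderId is not assignable to UserId

    If intersections of a primitive with an object type were eagerly reduced to never, this whole pattern would stop working.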


    Another issue he mentioned:

    We allow such types to exist primarily to make it easier to discover their origin (e.g. an intersection of object types containing two properties with the same name).

    So if you have some type like {foo: string} & {foo: number}, this could be reduced to {foo: never} or even just never (after all, no value of type {foo: never} can exist), but I guess error messages would become less understandable:

    interface A {foo: string}
    interface B {foo: number}
    type C = A & B;
    const c: C = {foo: "hello"}; // error! string is not assignable to string & number;
    

    That gives you some idea that something expects foo to be both a string and a number, which is impossible, but points you to investigate the C type. Otherwise:

    const c: C = {foo: "hello"}; // error! string is not assignable to never
    

    This is less understandable, I guess.

    Personally I think this is a weaker reason than the first one, but it's part of the "definitive" answer to your question.


    There are other reasons why the compiler doesn't perform operations that developers want it to; the most generic reason is time. Even if you can show that a hypothetical compiler operation doesn't break anyone's code and helps your use case, you need to demonstrate that it doesn't seriously damage the compiler's performance. In this case, how aggressively should the compiler check for possible reductions of intersections to never? If A & B is, in general, not very likely to reduce to never, then most of your checks for this will be wasted effort. So the check had better be very quick.

    This performance issue turns out to be a very common reason why feature proposals and suggestions get turned down or ultimately don't make it into the language. I don't see it specifically listed in any discussion of this particular issue, but I'd be very surprised if it weren't a big factor.
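
    As an aside: if the goal is just to filter the original union down to its array members, the standard library's Extract utility type produces the type you expected, since conditional types distribute over unions instead of intersecting each member:

    type bar = string | number | string[] | number[];

    // Extract<T, U> is defined as T extends U ? T : never, so the
    // non-array members map to never and drop out of the union.
    type ArrayMembers = Extract<bar, any[]>; // string[] | number[]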