It has occurred to me that the char type in Java could be entirely replaced with integer types (keeping character literals for programmers' convenience). This would allow flexibility of storage size, since ASCII takes only one byte while Unicode beyond the Basic Multilingual Plane requires more than two bytes. If a character is just a two-byte number like the short type, why is there a separate type for it?
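For concreteness, here's a small sketch (class name is mine) of the sizes involved: an ASCII character fits in one UTF-8 byte, a BMP character in up to three, and a character beyond the BMP takes four bytes, and in Java a surrogate pair of two chars:

    import java.nio.charset.StandardCharsets;

    public class CharSizes {
        public static void main(String[] args) {
            String ascii = "A";    // U+0041, one UTF-8 byte
            String bmp = "ꙁ";      // U+A641, three UTF-8 bytes, still one char
            String beyond = "😀";  // U+1F600, outside the BMP

            System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);  // 1
            System.out.println(bmp.getBytes(StandardCharsets.UTF_8).length);    // 3
            System.out.println(beyond.getBytes(StandardCharsets.UTF_8).length); // 4

            // length() counts char values, not characters: the emoji needs two
            System.out.println(beyond.length());                           // 2
            System.out.println(beyond.codePointCount(0, beyond.length())); // 1
        }
    }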
Nothing is 100% necessary in a programming language; we could all use BCPL if we really wanted to. (BTW, I learned BCPL well enough to write FizzBuzz a few years ago and recommend the exercise. It's a language with an interesting viewpoint and historical perspective.)
The question is: does char simplify or improve programming? Put in your terms, is it worth one byte per character to avoid the if/then complexity inherent in using byte for some characters and short for others? I think the answer is yes, because that byte is cheap: even a novel-length string costs only half a megabyte extra, or about one cent's worth of RAM.

Or, compared to using short: does having a separate unsigned 16-bit type improve or simplify anything over using a signed 16-bit type to hold Unicode code points, under which the character "ꙁ" would be a negative number in Java but positive in reality? A matter of judgment.
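A minimal sketch of that last point (class name is mine): the code point of "ꙁ" is U+A641, i.e. 42561, which fits in Java's unsigned char but overflows a signed short:

    public class SignedCodePoint {
        public static void main(String[] args) {
            char c = 'ꙁ';         // char is unsigned 16-bit: 0 to 65535
            short s = (short) c;  // short is signed 16-bit: -32768 to 32767

            System.out.println((int) c); // 42561, positive, matching reality
            System.out.println(s);       // -22975, negative in Java
        }
    }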