I can see that TensorFlow Lite uses FlatBuffers by default, and the documentation notes that it is in fact more efficient.
Why isn't TensorFlow using it by default?
Probably because the team didn't know it existed when TensorFlow was started. FlatBuffers is a relatively new technology, whereas Protocol Buffers has been in use at Google almost since the company's beginning and is the default serialization format for nearly everything there.
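As for why FlatBuffers is the more efficient choice for TFLite: Protocol Buffers must be parsed into in-memory objects before any field can be read, while FlatBuffers lays fields out at computable offsets so they can be read directly from the raw buffer, with no decode step or extra allocation. Here is a minimal conceptual sketch of that difference using only Python's `struct` module (it does not use either real library; the field layout and function names are invented for illustration):

```python
import struct

# Toy "message" with three int32 fields: width, height, channels.
buf = struct.pack("<iii", 640, 480, 3)

def parse_whole_message(buf):
    """Protobuf-style: decode every field into an object before use."""
    width, height, channels = struct.unpack_from("<iii", buf, 0)
    return {"width": width, "height": height, "channels": channels}

def read_height_in_place(buf):
    """FlatBuffers-style: read one field at a known offset, zero-copy."""
    # height is the second int32, so it lives at byte offset 4.
    return struct.unpack_from("<i", buf, 4)[0]

msg = parse_whole_message(buf)       # full decode, allocates a dict
print(msg["height"])                 # 480
print(read_height_in_place(buf))     # 480, without decoding the rest
```

On a mobile device, skipping the decode step means a model file can be memory-mapped and read in place, which is why the TFLite docs describe FlatBuffers as more efficient for that use case.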