Given a clocked 3-level (-1,0,+1) channel between two devices, what is the most stream-efficient way to convert a stream of bits to and from the channel representation?
The current method is to take 3 binary bits and convert them into two trits. I believe this wastes about 11% of the channel's code space, since 1 out of the 9 possible trit pairs is never used. I suspect grouping into larger blocks might reduce this waste, but this project uses 8-bit devices, so my group sizes are restricted.
I'd like to use divmod-3, but I don't have the entire binary stream available at any one point. Is there a method for an 'incremental' divmod-3 that can start at the LSB?
As an untrained guess, I speculate that there should be an approach of the form 'analyze the next 3 bits, remove one bit, change one bit', but I haven't been able to find anything workable.
Try packing 11 bits (2048 codes) into 7 trits (2187 codes); you'll get less than 1% of overhead. There are several methods. The first is straightforward: a lookup table. The second is divmod-3. The third is some bit/trit manipulation like the scheme below.
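As a sketch of the divmod-3 route (the function names and the int8_t trit representation -1/0/+1 are my assumptions, not part of any fixed API), each 11-bit group is converted by repeated division by 3, least significant trit first, and rebuilt with Horner's rule on the receiving side:

```c
#include <stdint.h>

/* Pack an 11-bit group (0..2047) into 7 trits by repeated divmod-3.
   The base-3 digit 2 is sent as the -1 level; any fixed digit-to-level
   mapping works as long as both ends agree. */
void pack11(uint16_t bits, int8_t trits[7])
{
    for (int i = 0; i < 7; i++) {
        uint8_t d = bits % 3;          /* next base-3 digit, LSD first */
        trits[i] = (d == 2) ? -1 : (int8_t)d;
        bits /= 3;
    }
}

/* Inverse: rebuild the 11-bit group from 7 received trits. */
uint16_t unpack11(const int8_t trits[7])
{
    uint16_t bits = 0;
    for (int i = 6; i >= 0; i--)       /* Horner's rule, MSD first */
        bits = bits * 3 + (uint16_t)((trits[i] == -1) ? 2 : trits[i]);
    return bits;
}
```

On an 8-bit MCU the 16-bit divisions may be slow, which is where the lookup table or the bit/trit manipulation below becomes attractive.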
First stage: pack the first 9 bits using a 3-bit-to-2-trit scheme:
abc def ghi jk => mn pq rs jk (mn, pq, rs are trit pairs)
bits trits
0ab -> ab
10a -> Za
11a -> aZ (I'll use Z for -1 for compactness)
The pair ZZ is never produced here; it will be used further on.
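A minimal C sketch of this first stage (again, the function name and trit representation are my own assumptions):

```c
#include <stdint.h>

/* Stage 1: map 3 bits to 2 trits per the table above (Z = -1).
   0ab -> ab, 10a -> Za, 11a -> aZ; the pair ZZ is never produced. */
void bits3_to_trits2(uint8_t b /* 0..7, first bit of the table is the MSB */,
                     int8_t t[2])
{
    uint8_t a1 = (b >> 1) & 1, a0 = b & 1;
    if ((b & 4) == 0) {        /* 0ab -> ab */
        t[0] = a1; t[1] = a0;
    } else if (a1 == 0) {      /* 10a -> Za */
        t[0] = -1; t[1] = a0;
    } else {                   /* 11a -> aZ */
        t[0] = a0; t[1] = -1;
    }
}
```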
Second stage: use slightly more complex logic to pack the 6 trits and the 2 remaining bits into 7 trits:
mn pq rs 0k -> mn pq rs k
mn pq rs 10 -> mn pq rs Z
mn pq rZ 11 -> mn pq ZZ r
mn pq r0 11 -> mn ZZ pq r
mn pq r1 11 -> ZZ mn pq r
The unused codes would be (x stands for any trit):
ZZ ZZ xx x
ZZ xx ZZ x
xx ZZ ZZ x
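Putting the second stage into the same sketch form (names and representation are mine; the five branches are the five rows above):

```c
#include <stdint.h>

/* Stage 2: pack three stage-1 trit pairs (mn, pq, rs) plus the two
   remaining bits j, k into 7 trits, exploiting the fact that stage 1
   never emits the pair ZZ (Z = -1). */
void pack_stage2(const int8_t mn[2], const int8_t pq[2], const int8_t rs[2],
                 uint8_t j, uint8_t k, int8_t out[7])
{
    if (j == 0) {                 /* mn pq rs 0k -> mn pq rs k */
        out[0]=mn[0]; out[1]=mn[1]; out[2]=pq[0]; out[3]=pq[1];
        out[4]=rs[0]; out[5]=rs[1]; out[6]=(int8_t)k;
    } else if (k == 0) {          /* mn pq rs 10 -> mn pq rs Z */
        out[0]=mn[0]; out[1]=mn[1]; out[2]=pq[0]; out[3]=pq[1];
        out[4]=rs[0]; out[5]=rs[1]; out[6]=-1;
    } else if (rs[1] == -1) {     /* mn pq rZ 11 -> mn pq ZZ r */
        out[0]=mn[0]; out[1]=mn[1]; out[2]=pq[0]; out[3]=pq[1];
        out[4]=-1;    out[5]=-1;   out[6]=rs[0];
    } else if (rs[1] == 0) {      /* mn pq r0 11 -> mn ZZ pq r */
        out[0]=mn[0]; out[1]=mn[1]; out[2]=-1;    out[3]=-1;
        out[4]=pq[0]; out[5]=pq[1]; out[6]=rs[0];
    } else {                      /* mn pq r1 11 -> ZZ mn pq r */
        out[0]=-1;    out[1]=-1;   out[2]=mn[0]; out[3]=mn[1];
        out[4]=pq[0]; out[5]=pq[1]; out[6]=rs[0];
    }
}
```

The decoder inverts this by first locating a ZZ pair: none means the two extra bits come from the 7th trit alone; a ZZ in the third, second or first pair position means the bits were 11 and s was Z, 0 or 1 respectively, with r carried in the 7th trit.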
UPD: other suitable packing ratios are 19b -> 12t (~0.1% overhead) and 84b -> 53t (~0.0035% overhead), but those seem to be overkill.
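For reference, packing b bits into t trits requires 3^t >= 2^b, and the overhead is 1 - b/(t*log2(3)): about 0.85% for 11b -> 7t, 0.11% for 19b -> 12t, and 0.0035% for 84b -> 53t.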