Tags: c++, c, architecture, standards, hardware

How adaptable are the C and C++ standards to a hypothetical ternary hardware architecture?


How easily could you program a ternary computer in C or C++?

Obviously, the standard bitwise operators (like &, | and ^) only make sense when using binary logic.

For integer values, the C standard refers to value ranges, while the C++ standard mentions bit lengths (e.g., long has to be at least 32 bits long). How would that apply to a computer using trits (i.e., ternary digits)?

Would it, in general, be practical to use a slightly modified version of C/C++ for programming on a ternary architecture, or should you design a new programming language from scratch?

Important points to consider would be backward compatibility (could binary-assuming programs be easily compiled for a ternary architecture, or would an emulation of binary data storage be necessary?) and assumptions implicit in the design of the C/C++ standards.
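
For concreteness, a typical piece of "binary-assuming" code might look like the following sketch (the names Flag and has_flag are hypothetical, purely for illustration):

    // Bit flags packed with bitwise operators -- a pattern that presumes
    // power-of-two storage and bit-level semantics.
    enum Flag : unsigned { Read = 1u << 0, Write = 1u << 1, Exec = 1u << 2 };

    bool has_flag(unsigned flags, Flag f) {
        return (flags & f) != 0;  // only meaningful if storage is bit-based
    }

    int main() {
        unsigned flags = Read | Exec;
        return has_flag(flags, Write) ? 1 : 0;  // returns 0: Write not set
    }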


Solution

  • The wording of the C++ standard assumes a binary architecture:

    [intro.memory]/1:

    The fundamental storage unit in the C++ memory model is the byte. A byte is at least large enough to contain any member of the basic execution character set and the eight-bit code units of the Unicode UTF-8 encoding form and is composed of a contiguous sequence of bits, the number of which is implementation-defined.

    [basic.fundamental]/4:

    Unsigned integers shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
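
    As a concrete illustration of this modulo-2^n guarantee (a minimal, self-contained example):

        #include <cassert>
        #include <climits>

        int main() {
            // (2^n - 1) + 1 wraps around to 0, because unsigned arithmetic
            // is defined modulo 2^n, with n the number of value bits.
            unsigned int x = UINT_MAX;
            assert(x + 1u == 0u);
        }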

    Furthermore, bit-fields and padding bits are frequently used concepts.

    Operators like left-shift and right-shift also refer to bits, and bitwise-and, bitwise-or and bitwise-xor are by definition operations that operate at bit level, assuming that each bit is either true or false.
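
    To make that bit-level behavior concrete, a minimal example (the binary literals assume C++14 or later):

        #include <cassert>

        int main() {
            unsigned x = 0b1011u;              // 11 in binary
            assert((x << 2) == x * 4u);        // left-shift by k multiplies by 2^k
            assert((x >> 1) == x / 2u);        // right-shift by k divides by 2^k
            assert((x & 0b0010u) == 2u);       // AND tests individual bits
            assert((x | 0b0100u) == 0b1111u);  // OR sets individual bits
            assert((x ^ x) == 0u);             // XOR operates bit by bit
        }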

    What if the standard were adapted to a ternary architecture?

    We could imagine the standard using another term to designate the smallest piece of information in the architecture, similar to what was done for the byte (a byte, although most often 8 bits, is not defined as exactly 8 bits in the standard, so the standard could well accommodate machines with 10-bit bytes).
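
    For instance, the number of bits in a byte is exposed as CHAR_BIT, which the standard only constrains to be at least 8:

        #include <climits>
        #include <cstdio>

        int main() {
            // CHAR_BIT is implementation-defined (>= 8); the standard does
            // not pin a byte to exactly 8 bits, so a machine with 10-bit
            // bytes can still be conforming.
            std::printf("bits per byte on this implementation: %d\n", CHAR_BIT);
        }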

    Nevertheless, the consequences would be severe:

    • Left-shift, for example, is assumed in many algorithms to multiply by a power of 2; on a ternary machine, shifting by one position would suddenly multiply by a power of 3, and likewise for right-shift. A lot of existing code would therefore stop working.

    • Bitwise operations are not defined for trits: they are only defined for binary digits. The standard would have to redefine them one way or another, for example by emulating the original behavior with some kind of power-of-2 arithmetic (see the sketch after this list). Again, there is a good chance that some existing code would break, especially where these operators are combined with shifts.
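
    To give an idea of what such an emulation could look like, here is a minimal sketch that reproduces bitwise AND using only arithmetic, the way a compiler targeting a machine without native bit operations might have to (the function name emulated_and is hypothetical):

        #include <cassert>

        // Recover each "bit" of the operands by division/modulo by 2 and
        // rebuild the result from the matching powers of 2.
        unsigned emulated_and(unsigned a, unsigned b) {
            unsigned result = 0, weight = 1;
            while (a != 0 && b != 0) {
                if (a % 2 == 1 && b % 2 == 1)
                    result += weight;  // both bits set at this position
                a /= 2;
                b /= 2;
                weight *= 2;
            }
            return result;
        }

        int main() {
            assert(emulated_and(12u, 10u) == 8u);  // 1100 & 1010 == 1000
        }

    Every "shift" or "mask" in a hot loop would turn into this kind of multiply/divide sequence, which hints at the performance cost of such emulation.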

    Additional remark

    Since Norbert Wiener's visionary book "Cybernetics", published in 1948 (!!!), there has been little doubt that alternatives to the binary system are out of the running. In the chapter "Computing machines and the nervous system" he explained very well why digital machines are more accurate and efficient than analog ones, and why, among digital machines, binary arithmetic outperformed the others: it was simpler, faster and, in addition, easier and cheaper to implement. So far nobody has managed to demonstrate the contrary, so there is no ternary computer architecture in sight anytime soon.

    Comments

    Peter points out in the comments that an implementation only has to reproduce the observable behavior of the abstract machine defined in the C++ standard. This is true according to [intro.abstract]/5. But my point is that, for ternary machines, this is only a theoretical truth.

    The binary model is such a strong assumption in so many places of the standard, and so intertwined with the addressing scheme, that I would contend it is impossible to emulate it efficiently and consistently on a ternary machine.

    Just to illustrate the issue with the definition of bytes: it takes 6 trits to satisfy the requirements for a byte (3^5 = 243 states are too few to hold the 256 values of an 8-bit code unit, while 3^6 = 729 suffice). However, 6 trits correspond to roughly 9.5 bits. For a byte to be composed of a whole number of bits, as the standard requires, you would need s trits such that 3^s == 2^n, and this equation has no solution in positive integers. Alternatively, you could declare that a byte is 9 bits stored in 6 trits and simply ignore some ternary values. But since bytes are used to store pointers, you would then also be ignoring some memory ranges, so you would need a mapping function to convert between values stored in bytes and machine addresses. And what about hardware alignment constraints? These might not correspond to alignments expressible in the binary model, and so on.

    In the end, you would need a slow virtual machine that completely emulates a binary architecture in software (presumably with a level of performance comparable to the many MIPS emulators on the x86 architecture, so fine for educational purposes). I think this could then comply with the standard, but it would no longer meet our performance expectations.
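
    A quick back-of-the-envelope check of those numbers, as a minimal sketch:

        #include <cmath>
        #include <cstdio>

        int main() {
            // How many trits does it take to hold at least 2^8 = 256 states,
            // the minimum a byte must be able to represent?
            int trits = 0;
            long states = 1;
            while (states < 256) {
                states *= 3;
                ++trits;
            }
            // 6 trits give 729 states, i.e. log2(729) ~ 9.51 bits: not a
            // whole number of bits, since 3^s == 2^n has no integer solution.
            std::printf("%d trits -> %ld states (%.2f bits)\n",
                        trits, states, std::log2((double)states));
        }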