I need to perform fast Galois field arithmetic in my application. I have a multiplication function written in assembly that has been optimized for my platform, an MSP430 microcontroller. That function computes the product of two large numbers of arbitrary size, but each number must be represented as an array of 16-bit integers. In my project, however, a Galois field element is represented as an array of 16 64-bit integers. How do I convert my array of 16 64-bit integers into the representation needed by my optimized assembly multiplication function (i.e. an array of 64 16-bit integers)? Of course, simply casting the array to (UInt16 *) does not work.
The MSP430 is a little-endian architecture. Thanks in advance for any suggestions.
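To make the layout concrete, here is a rough sketch of the two representations and the kind of prototype the assembly routine has (the function name and signature below are only placeholders for my actual routine):

    #include <stdint.h>

    /* How a field element is stored in my project: 16 x 64-bit words. */
    uint64_t element[16];

    /* What the assembly multiplier expects: operands as arrays of
       16-bit words (64 of them for a value of the same total width).
       This prototype is only illustrative. */
    extern void mul_asm(uint16_t *product, const uint16_t *a,
                        const uint16_t *b, unsigned num_words);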
As mentioned by @JohnBollinger, I was able to simply reinterpret the bytes of the array of uint64_t as an array of uint16_t by casting. For some reason, I was thinking the bytes had to be reordered somehow, but after testing I am getting the correct results. This didn't work for me initially because of other, unrelated issues.
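For reference, here is a minimal sketch of what worked for me. The prototype gf_mul_asm stands in for my actual assembly routine; its name, argument order, and limb count are placeholders:

    #include <stdint.h>

    /* Placeholder prototype for the optimized assembly multiplier:
       operands and result are arrays of 16-bit limbs, least-significant
       limb first (the natural layout on the little-endian MSP430). */
    extern void gf_mul_asm(uint16_t *product, const uint16_t *a,
                           const uint16_t *b, unsigned num_limbs);

    #define GF_WORDS_64  16                 /* 16 x 64-bit limbs per element   */
    #define GF_WORDS_16  (GF_WORDS_64 * 4)  /* = 64 x 16-bit limbs per element */

    void gf_mul(uint16_t product[2 * GF_WORDS_16],
                const uint64_t a[GF_WORDS_64],
                const uint64_t b[GF_WORDS_64])
    {
        /* On a little-endian target, the in-memory byte layout of
           16 uint64_t values is identical to that of 64 uint16_t values,
           so the operands can simply be reinterpreted with a cast. */
        const uint16_t *a16 = (const uint16_t *)a;
        const uint16_t *b16 = (const uint16_t *)b;

        gf_mul_asm(product, a16, b16, GF_WORDS_16);
    }

If strict aliasing ever becomes a concern with the compiler in use, copying the operands into uint16_t buffers with memcpy produces the same bit pattern without the cast.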