I'm converting some OpenCL code to DirectCompute and need to process 8-bit character strings in a compute shader, but I can't find an HLSL data type for "byte" or "char". OpenCL supports a "char" type, so I was expecting an equivalent. What is the best way to define and access the data?
The data could apparently be passed by treating it as a series of "uint" values and unpacking it with bit-shifting, AND-ing, etc., but that seems like unnecessary overhead. What is the correct way?
I've found two ways to do this, although both require working with int/uint values in HLSL, since I haven't found an 8-bit data type:
Option 1 is to let the "view" handle the translation: pass the original data as a byte buffer, create the shader resource view with the format DXGI_FORMAT_R8_UINT, and declare the buffer in HLSL as:
Buffer<uint> buffer;
Each element of the view is then a single byte, zero-extended to a uint on read, so you index the buffer by byte offset.
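For example, here is a minimal compute shader sketch that copies the bytes out; the buffer names and registers are illustrative, and it assumes the SRV was created with Format = DXGI_FORMAT_R8_UINT:
Buffer<uint> byteBuffer : register(t0);  // one element per byte of input
RWBuffer<uint> result   : register(u0);

[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID)
{
    // Each load returns the byte value 0-255, already widened to uint.
    result[id.x] = byteBuffer[id.x];
}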
Option 2 is to treat each 4-byte sequence as a uint, using the format DXGI_FORMAT_R32_UINT, and manually extract each character using something like this:
Buffer<uint> buffer;  // SRV created with DXGI_FORMAT_R32_UINT
uint offset = ...;    // index of the 32-bit element to unpack
uint ch1, ch2, ch3, ch4;
// On a little-endian host the first character of the string is packed
// into the low byte of the uint, so unpack from the bottom up:
ch1 =  buffer[offset]               & 0x000000ff;
ch2 = (buffer[offset] & 0x0000ff00) >> 8;
ch3 = (buffer[offset] & 0x00ff0000) >> 16;
ch4 =  buffer[offset]               >> 24;
Either way you end up working with 32-bit values, but at least each one corresponds to an individual character.
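For Option 2, the unpacking can also be wrapped in a small helper that fetches any character by its byte offset. This is just a sketch with the same little-endian assumption as above, and the function name is illustrative:
Buffer<uint> buffer;  // DXGI_FORMAT_R32_UINT view over the packed string

// Returns the byte at position byteIndex in the original string.
uint GetChar(uint byteIndex)
{
    uint word  = buffer[byteIndex >> 2];   // which 32-bit element
    uint shift = (byteIndex & 3) * 8;      // which byte within that element
    return (word >> shift) & 0xff;
}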