I'm using Unity, and I want to send a byte array to the GPU using HLSL. I know about ByteAddressBuffer, but I have no idea how to use it. I basically just want to know how to send bytes to the GPU. I want to have a StructuredBuffer<byte> in my compute shader.
For the shader part, you can use a StructuredBuffer. I don't know if there is a byte data type in HLSL, so I will just use integers for this example.
Shader code:
Shader "Name" {
SubShader {
...
StructuredBuffer<int> _Data;
...
}
}
On the C# side, you have a Material that corresponds to your shader, let's call it mat, and your byte array bArr. Additionally, you have to create a GPU buffer that you can then bind to your shader: ComputeBuffer dataBuf = new ComputeBuffer(bArr.Length, sizeof(int)). Since each element of that buffer is 4 bytes wide, widen bArr into an int[] first; SetData copies raw memory, so handing it the byte[] directly would not give you one buffer element per byte. Finally, load the widened array onto the GPU with dataBuf.SetData(widened) and bind the buffer to your shader with mat.SetBuffer("_Data", dataBuf); (see the sketch below).
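Putting the C# side together, a minimal sketch (the ByteUploader class name is mine; everything else uses the names from above):

using UnityEngine;

public class ByteUploader : MonoBehaviour
{
    public Material mat;          // the material using the shader above
    private ComputeBuffer dataBuf;

    void UploadBytes(byte[] bArr)
    {
        // widen each byte to an int so one buffer element holds one byte
        int[] widened = new int[bArr.Length];
        for (int i = 0; i < bArr.Length; i++)
            widened[i] = bArr[i];

        dataBuf = new ComputeBuffer(bArr.Length, sizeof(int));
        dataBuf.SetData(widened);
        mat.SetBuffer("_Data", dataBuf);
    }

    void OnDestroy()
    {
        // compute buffers are not garbage collected, release them explicitly
        if (dataBuf != null) dataBuf.Release();
    }
}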
Edit
I want to have a StructuredBuffer<byte> in my compute shader.
From what I've read, you can't. There is no byte data type in HLSL (nor in Cg, which is what Unity uses). The example above is a standard vertex/fragment shader; for using compute shaders, I would refer you to my answer on your other question (a small dispatch sketch is also included at the end of this answer). Augment it to your needs. As I have already written in a comment, if you do not want to use an int for each byte and thus waste 24 bits, you can pack 4 bytes into 1 int with bit shifting. The shift operators are available from shader model 4.0 upwards (DX10-class hardware).
An example of how to encode and decode this packing is as follows:
// encoding on the CPU (C#): pack 4 bytes into one int (little-endian)
int myInt = 0;
myInt |= myByte1;
myInt |= myByte2 << 8;
myInt |= myByte3 << 16;
myInt |= myByte4 << 24;

// decoding on the GPU (HLSL): mask each byte out again
int myByte1 = myInt & 0xFF;
int myByte2 = (myInt >> 8) & 0xFF;
int myByte3 = (myInt >> 16) & 0xFF;
int myByte4 = (myInt >> 24) & 0xFF;
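On the C# side you can build the packed array like this; a minimal sketch, where PackBytes is a hypothetical helper name:

// packs 4 bytes into each int, matching the shift encoding above
// (little-endian byte order within each int)
static int[] PackBytes(byte[] bytes)
{
    int[] packed = new int[(bytes.Length + 3) / 4]; // round up
    for (int i = 0; i < bytes.Length; i++)
        packed[i / 4] |= bytes[i] << ((i % 4) * 8);
    return packed;
}

The ComputeBuffer is then created with packed.Length elements instead of bArr.Length, and the shader reads _Data[i >> 2] and shifts by (i & 3) * 8 to recover byte i.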
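For completeness, binding the buffer to a compute shader instead of a Material looks roughly like this; a sketch, assuming a ComputeShader asset whose kernel is named CSMain and declared with [numthreads(64,1,1)] (both are my assumptions):

public ComputeShader shader; // the .compute asset

void RunKernel(ComputeBuffer dataBuf, int elementCount)
{
    int kernel = shader.FindKernel("CSMain");
    shader.SetBuffer(kernel, "_Data", dataBuf);
    // one thread group per 64 elements, matching [numthreads(64,1,1)]
    shader.Dispatch(kernel, Mathf.CeilToInt(elementCount / 64f), 1, 1);
}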