I am working in C#, with Unity. I have created a compute shader to which I want to pass my array of C# data structures. That's when I realized I might have gone overboard with fancy data structures, instead of sticking to the C-like structures of shaders.
More specifically, I'm hitting two roadblocks:
1. Each of the objects I want to pass to the shader's buffer contains a linked list, which means the size varies between objects. Can the shader buffer be defined as anything other than an array whose size is passed explicitly?
2. Each of the objects I want to pass to the shader is polymorphic, i.e. they all have a field "type", but objects with type==0 have a field value0, while those with type==1 have a field value1. Is it reasonable to try to achieve that in shader language (with a C-like union, maybe)?
interface IObject {
    public int type { get; }
}

class Type0 : IObject {
    public int type => 0;
    public int field0;
}

class Type1 : IObject {
    public int type => 1;
    public float field1;
}

class ItemForShader {
    public List<IObject> objects { get; set; }
}

...

// Can I pass this to the shader?
var shaderParameters = new List<ItemForShader>() {
    new ItemForShader() {
        objects = new() { new Type0(), new Type1() }
    },
    new ItemForShader() {
        objects = new() { new Type1(), new Type0(), new Type0() }
    }
};
Please note: I don't need you to detail any solution at length (I can already see you describing how to pass a shared pool of linked-list nodes and then share those nodes between all the objects with a fancy indexing system).
I simply want to know what's reasonable and how it's usually done (or not done). Maybe describe, in essence, what C structure matches what C# structure in that scenario?
EDIT: As I feared, this does not bode well: GLSL array uniform with a dynamic length
Make multiple shaders, one for each number of potential items in the linked lists you want to support. GLSL does not support variable sized arrays. The only other solution is to create arrays with the largest size you'll ever use.
The literal answer to your question is: You can't.
GPU code cannot allocate memory. All memory allocations are figured out at shader compile time. You can't have an array of N elements. You can't have polymorphic data types.
But you can have blocks of data with a predetermined size, and build those concepts out of that. In this case, because int and float are the same size (4 bytes), you can have a tightly packed union struct. In general, you just have to figure out the largest variant and make all objects use that size. Here's an example of how to build the union struct:
C#:
[StructLayout(LayoutKind.Explicit)]
public struct IObject {
    [FieldOffset(0)]
    public uint type;
    // FieldOffset is in bytes: the payload starts after the 4-byte type,
    // and both union members share the same 4 bytes at offset 4.
    [FieldOffset(4)]
    public int field0;
    [FieldOffset(4)]
    public float field1;

    public static IObject Type0(int field0) {
        return new IObject() {
            type = 0,
            field0 = field0,
        };
    }
    public static IObject Type1(float field1) {
        return new IObject() {
            type = 1,
            field1 = field1,
        };
    }
}
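To get that array onto the GPU, the structs can go straight into a ComputeBuffer, since every element now has the same 8-byte layout. A minimal sketch — the kernel name "CSMain" and buffer name "objects" are placeholders, not anything from your code:

```csharp
// Flat array of 8-byte union structs (4-byte type + 4-byte payload).
IObject[] data = {
    IObject.Type0(42),
    IObject.Type1(3.14f),
};

// Stride must match the HLSL struct: two 4-byte fields = 8 bytes.
var buffer = new ComputeBuffer(data.Length, 8);
buffer.SetData(data);

// "CSMain" and "objects" are assumed names for this sketch.
int kernel = computeShader.FindKernel("CSMain");
computeShader.SetBuffer(kernel, "objects", buffer);
```

Remember to call buffer.Release() when you're done with it, or Unity will warn about leaked GPU memory.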
HLSL:
struct IObject {
    uint type;
    int unionMember;
};

int GetField0(IObject o) {
    return o.unionMember;
}

float GetField1(IObject o) {
    return asfloat(o.unionMember);
}
Note that asfloat is the HLSL equivalent of a C-style reinterpret-cast: it does not return the float with the same numeric value as the int, but the float whose bytes are the int's bytes interpreted as a float.
I will also point out that GPU shader cores are not efficient at traversing complex data structures. Take a look at this question for a general idea of why.