In Unity, a heightmap is internally stored as an Int16 (note that only the range 0-32767 is used). I want to send the heightmap to the GPU, ideally using only 16 bits per sample.
It seems like the best way to do that is to encode each heightmap value into an RG16 texture (since Unity doesn't let me choose a single-channel 16-bit integer format) and pack/unpack as necessary.
Here is how I build the heightmap texture I send to the GPU (in a CPU-side C# script):
const int Size = 1024;
Color32[] colours = new Color32[Size * Size];
for (int y = 0; y < Size; y++) {
    for (int x = 0; x < Size; x++) {
        float height = heights[y, x]; // With Unity it's correct to flip the x/y axes
        short s = (short)(height * short.MaxValue);
        byte upper = (byte)(s >> 7); // Shift by 7, not 8, since values outside 0-32767 never occur anyway.
        byte lower = (byte)(s & 255);
        colours[y * Size + x] = new Color32(upper, lower, 0, 0);
    }
}
tex.SetPixels32(colours);
tex.Apply();
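For completeness, the way the texture itself is created matters with this kind of packing, because any filtering will blend the packed bytes of neighbouring texels and scramble the reconstructed value. A minimal setup sketch, assuming the tex used above is a Texture2D (the texture creation isn't shown here, so the format and flags are assumptions):

// Hypothetical texture setup (not shown above).
// TextureFormat.RG16 stores two 8-bit channels (R and G); point filtering keeps
// the sampler from averaging the packed bytes of adjacent texels, which would
// corrupt the reconstructed 16-bit height.
var tex = new Texture2D(Size, Size, TextureFormat.RG16, false, true); // no mipmaps, linear
tex.filterMode = FilterMode.Point;
tex.wrapMode = TextureWrapMode.Clamp;

If SetPixels32 doesn't accept the RG16 format on a particular Unity version, Texture2D.SetPixelData can upload the raw bytes instead.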
I am running into a confusing issue where there seems to be a large loss of precision when unpacking the heightmap sample. I've tried the bitwise way:
float HeightmapSample(float u, float v) {
    // float2 rather than fixed2: fixed/half may not hold an exact 8-bit channel
    // value on platforms where it's a real reduced-precision type.
    float2 height = tex2Dlod(_Heightmap, float4(u, v, 0.f, 0.f)).rg;
    // round() rather than a truncating cast, so e.g. 126.9999 doesn't become 126.
    int2 heightInt = (int2)round(height * 255.f);
    int unpacked = (heightInt.r << 7) | heightInt.g;
    return unpacked / 32767.f; // Renormalize to 0-1
}
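For reference, the pack/unpack round trip can be sanity-checked entirely on the CPU before involving the GPU at all; a minimal sketch, mirroring the encoder above and the bitwise unpack (the method name is made up):

// Hypothetical CPU-side check (not part of the original code): pack a normalised
// height the way the C# encoder does, then unpack it the way the bitwise shader
// does, to confirm the round trip itself is lossless to within ~1/32767.
static float PackUnpackRoundTrip(float height01)
{
    short s = (short)(height01 * short.MaxValue);
    byte upper = (byte)(s >> 7);
    byte lower = (byte)(s & 255);

    // What the shader sees after sampling: each byte normalised to 0-1.
    float r = upper / 255f;
    float g = lower / 255f;

    // Mirror of the bitwise unpack above.
    int unpacked = ((int)System.Math.Round(r * 255f) << 7) | (int)System.Math.Round(g * 255f);
    return unpacked / 32767f;
}

If that returns the input to within about 1/32767, the packing scheme itself isn't where the precision is being lost.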
I also tried a (possibly) funky floating-point method:
float HeightmapSample(float u, float v) {
    float2 height = tex2Dlod(_Heightmap, float4(u, v, 0.f, 0.f)).rg;
    // Note: adding the channels counts bit 7 twice, since the 7-bit shift in the
    // encoder stores that bit in both bytes; the bitwise OR above avoids this.
    return height.r + (height.g / 128.f) - (1.f / 128.f);
}
Neither method produces anything even close to the ground truth:
After almost 2 days of hunting down the culprit, I realized I had a shader that was incorrectly re-encoding the heightmap every time my plugin loaded. I had deliberately not fixed that shader because I was focusing on the problem above instead, but it turned out to be the problem!