Overview
I have been working on a compute shader for a while now. Initially it took in one camera feed of 320x256 pixels, and that all works great. Now I want to handle multiple cameras, and after weighing a lot of options and trying things out, I think the best way forward is to use a Texture2DArray. With this implementation I cannot seem to read back the correct data: it always loads the first texture, while I want the texture for each camera to load. I set up the Texture2DArray so that the depth/volume slice is the camera index.
Issue
How would I request the camera texture per index from the compute shader? Requesting the whole array is completely fine, as I need that anyway, and the compute shader does not need to be completely finished yet either.
Error: UnityException: LoadRawTextureData: not enough data provided (will result in overread). I am currently struggling with this error and have tried many things to fix it. In the code snippet below I use GetData<float>, which returns half as many values as I expect; with ushort I get exactly the expected amount, but it still gives the same error.
While I would be happy with just fixing this issue, please tell me if there are better ways to go about certain things!
More details
Since it's a lot of setup and code, I will describe a short flow here: the camera feed is 256x320 in one colour channel (greyscale), in R16 format. The texture of this feed is applied to a RenderTexture in R16 format, with the volume depth set to the number of cameras and the dimension set to Tex2DArray (with random write enabled). The same kind of array is also created for the output, but without applying the textures. Then all data is sent to the compute shader.
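Roughly, the setup looks like the sketch below. The names (cameraInputTexturesArray, computeShader, kernel, "Input", "Result") and the Graphics.CopyTexture call are illustrative placeholders, not the exact project code.
// Sketch of the setup described above: one R16 layer per camera.
RenderTexture cameraInputTexturesArray = new RenderTexture(256, 320, 0, RenderTextureFormat.R16)
{
    dimension = TextureDimension.Tex2DArray,
    volumeDepth = cameras.Count,   // one layer per camera
    enableRandomWrite = true,
};
cameraInputTexturesArray.Create();

// Copy each camera feed into its own layer of the input array.
int index = 0;
foreach (KeyValuePair<GigECamera, RawImage> cam in cameras)
{
    Graphics.CopyTexture(cam.Key.VideoTexture, 0, 0, cameraInputTexturesArray, index, 0);
    index++;
}

// The output array is created the same way, but nothing is copied into it.
computeShader.SetTexture(kernel, "Input", cameraInputTexturesArray);
computeShader.SetTexture(kernel, "Result", cameraOutputTexturesArray);
// Dispatch dimensions here assume a [numthreads(8,8,1)] kernel, one Z group per camera.
computeShader.Dispatch(kernel, 256 / 8, 320 / 8, cameras.Count);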
The following code is what I use to get the texture output of the compute shader:
int index2 = 0;
foreach (KeyValuePair<GigECamera, RawImage> cam in cameras)
{
    AsyncGPUReadback.Request(cameraOutputTexturesArray, index2, TextureFormat.R16, result =>
    {
        if (result.hasError)
        {
            Debug.LogError("GPU readback error");
            return;
        }
        RenderTexture.active = cameraOutputTexturesArray;
        Texture2D extractedTexture = new Texture2D(cam.Key.VideoTexture.width, cam.Key.VideoTexture.height);
        // Read pixels from the combined texture into the extracted texture
        extractedTexture.LoadRawTextureData(result.GetData<float>());
        // Apply changes to the extracted texture
        extractedTexture.Apply();
        // Assign it to the camera
        cam.Value.texture = extractedTexture;
        index2++;
    });
}
Partial answer
Part of your problem is that you're addressing mipmaps instead of layers in your Texture2DArray: the second argument of AsyncGPUReadback.Request is the mip index, not the layer index, so passing index2 there doesn't select a camera slice. Using layers fixes one problem, and that might be the main problem you have.
Anyway, the code below, with a trivial compute shader and the usual RGBA32 format, works as expected for me.
#pragma kernel CSMain

RWTexture2DArray<float4> Result;

[numthreads(8,8,8)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // id.z is the layer (slice) of the texture array.
    Result[id.xyz] = float4(id.x/8., id.y/8., id.z/8., 1);
}
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
public class MyCompute : MonoBehaviour
{
    [SerializeField] ComputeShader shader;

    void Start()
    {
        int kid = shader.FindKernel("CSMain");
        // 8x8 texture array with 8 layers, matching the [numthreads(8,8,8)] kernel above.
        RenderTexture cameraOutputTexturesArray = new RenderTexture(8, 8, 0)
        {
            dimension = TextureDimension.Tex2DArray,
            volumeDepth = 8,
            enableRandomWrite = true,
        };
        shader.SetTexture(kid, "Result", cameraOutputTexturesArray);
        shader.Dispatch(kid, 1, 1, 1);

        // Request mip 0 of the whole array; individual layers are read out below.
        AsyncGPUReadback.Request(cameraOutputTexturesArray, 0, TextureFormat.RGBA32, result =>
        {
            if (result.hasError)
            {
                Debug.LogError("GPU readback error");
                return;
            }
            for (int i = 0; i < 8; i++)
            {
                Texture2D tex = new Texture2D(8, 8, TextureFormat.RGBA32, false);
                // GetData(i) returns the raw bytes of layer i; the generic type only
                // controls how those bytes are viewed.
                tex.LoadRawTextureData(result.GetData<float>(i));
                tex.Apply();
                Debug.Log($"camera {i} pixel @ (1,2) should be (0.125, 0.250, {i/8.0}, 1.000). actually is: {tex.GetPixel(1, 2)}");
            }
        });
    }
}
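To adapt this back to the R16 case from the question, the same pattern should carry over. Below is an untested sketch: it assumes the output array's volumeDepth equals cameras.Count, that each camera was written to the layer matching its position in the dictionary, and that cameras and cameraOutputTexturesArray are fields on the same MonoBehaviour (ReadBackAllCameras is just an illustrative name).
// Sketch: one readback request for the whole array, then one layer per camera.
void ReadBackAllCameras()
{
    // Mip index stays 0; layers are selected via GetData(layer) in the callback.
    AsyncGPUReadback.Request(cameraOutputTexturesArray, 0, TextureFormat.R16, result =>
    {
        if (result.hasError)
        {
            Debug.LogError("GPU readback error");
            return;
        }
        int layer = 0;
        foreach (KeyValuePair<GigECamera, RawImage> cam in cameras)
        {
            // R16 is 2 bytes per pixel, so read the layer as ushorts and load it into
            // a Texture2D that is also R16 without mipmaps. A Texture2D created without
            // an explicit format is RGBA32 with mipmaps and expects far more bytes,
            // which is the likely cause of the "not enough data provided" exception.
            Texture2D extracted = new Texture2D(cam.Key.VideoTexture.width, cam.Key.VideoTexture.height, TextureFormat.R16, false);
            extracted.LoadRawTextureData(result.GetData<ushort>(layer));
            extracted.Apply();
            cam.Value.texture = extracted;
            layer++;
        }
    });
}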