Given an MTLTexture, defined as follows.
// Create device.
id<MTLDevice> dev = MTLCreateSystemDefaultDevice();
// Size of texture.
const unsigned int W = 640;
const unsigned int H = 480;
// Define texture.
MTLTextureDescriptor *desc = [[MTLTextureDescriptor alloc] init];
desc.pixelFormat = MTLPixelFormatBGRA8Unorm;
desc.width = W;
desc.height = H;
// Create texture.
id<MTLTexture> tex = [dev newTextureWithDescriptor:desc];
It is my understanding that at this point I should have a texture as defined in desc, allocated on device dev and accessible through tex.
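As a minimal sanity check (my own addition, not part of the original setup), the allocation can be verified right after creation, since newTextureWithDescriptor: returns nil on failure.
// Sanity check: nil indicates the texture could not be created.
NSAssert(tex != nil, @"Texture allocation failed");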
Now, given another texture tex2 (known to be allocated and accessible) and a Metal compute kernel defined as follows.
kernel void foo(texture2d<float, access::read> in [[texture(0)]],
texture2d<float, access::write> out [[texture(1)]],
uint2 pix [[thread_position_in_grid]]) {
// Out of bounds check.
if (pix.x >= out.get_width() || pix.y >= out.get_height()) {
return;
}
// Do some processing on the input texture.
// ... All fine up to here.
  // Write out a pixel to the output texture.
const float4 p = abc; // abc is computed above.
out.write(p, pix);
}
It is my understanding that when the pixel p is written out to out, the values of p will be converted to conform to the pixel format of tex, in this case MTLPixelFormatBGRA8Unorm.
However, when launching the kernel as follows, the line in which p is written to out (above defined as tex) triggers a critical error (SIGABRT).
// Create a Metal library.
id<MTLLibrary> lib = [dev newDefaultLibrary];
// Load the kernel.
id<MTLFunction> kernel = [lib newFunctionWithName:@"foo"];
// Create a pipeline state.
id<MTLComputePipelineState> pipelineState = [dev newComputePipelineStateWithFunction:kernel error:NULL];
// Create a command queue.
id<MTLCommandQueue> cmdQueue = [dev newCommandQueue];
// Create command buffer.
id<MTLCommandBuffer> cmdBuff = [cmdQueue commandBuffer];
// Create compute encoder.
id<MTLComputeCommandEncoder> enc = [cmdBuff computeCommandEncoder];
// Set the pipeline state.
[enc setComputePipelineState:pipelineState];
// Set the input textures (tex2 is read only in the kernel, as above).
[enc setTexture:tex2 atIndex:0];
[enc setTexture:tex atIndex:1];
// 2D launch configuration.
const MTLSize groupDim = MTLSizeMake(16, 16, 1);
const MTLSize gridDim = MTLSizeMake((NSUInteger)ceilf((float)W / groupDim.width),
                                    (NSUInteger)ceilf((float)H / groupDim.height),
                                    1);
// Launch kernel.
[enc dispatchThreadgroups:gridDim threadsPerThreadgroup:groupDim];
[enc endEncoding];
[cmdBuff commit];
[cmdBuff waitUntilCompleted];
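When the dispatch aborts like this, it can also help to pass an NSError out-parameter to newComputePipelineStateWithFunction:error: and to inspect the command buffer once it has completed. A minimal sketch of the latter (not part of the original code):
// If execution failed, the command buffer records the reason.
if (cmdBuff.status == MTLCommandBufferStatusError) {
    NSLog(@"Command buffer failed: %@", cmdBuff.error);
}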
My question is: under the scenario outlined above, is my understanding of how one allocates an MTLTexture correct? Or is the example above merely defining a wrapper around some texture that I need to allocate separately?
The texture allocation and compute kernel launch above are correct. Upon further digging in the documentation, the missing piece was the usage property of MTLTextureDescriptor, for which the documentation states the following.
The default value for this property is MTLTextureUsageShaderRead.
As such, in the example given in the question, the following additional property assignment on MTLTextureDescriptor is required. Since tex is only written in this kernel, MTLTextureUsageShaderWrite alone suffices; note that assigning usage replaces the default read-only value, so a texture that is also read or sampled in a shader needs MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite.
desc.usage = MTLTextureUsageShaderWrite;
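Putting it together, the descriptor from the question would be set up along these lines (a minimal sketch; the combined read/write usage is only needed if the texture is also read from a shader):
// Define texture, now with a usage that permits shader writes.
MTLTextureDescriptor *desc = [[MTLTextureDescriptor alloc] init];
desc.pixelFormat = MTLPixelFormatBGRA8Unorm;
desc.width = W;
desc.height = H;
desc.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
// Create texture.
id<MTLTexture> tex = [dev newTextureWithDescriptor:desc];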