I have an array of float values that represents a 2D image (think: data from a CCD) that I ultimately want to render into an MTKView. This is on macOS, but I'd like to be able to apply the same approach on iOS at some point. I initially create an MTLBuffer with the data:
NSData *floatData = ...;
id<MTLBuffer> metalBuffer = [device newBufferWithBytes:floatData.bytes
                                                length:floatData.length
                                               options:MTLResourceCPUCacheModeDefaultCache | MTLResourceStorageModeManaged];
From here, I run the buffer through a few compute pipelines. Next, I want to create an RGB MTLTexture object to pass to a few CIFilter/MPS filters and then display. It seems to make sense to create a texture that uses the already created buffer as its backing store, to avoid making another copy. (I've successfully used textures with a pixel format of MTLPixelFormatR32Float.)
// Create a texture that wraps the scaled buffer, i.e. shares memory with it.
MTLTextureDescriptor *desc;
desc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR32Float
                                                          width:imageWidth
                                                         height:imageHeight
                                                      mipmapped:NO];
desc.usage = MTLTextureUsageShaderRead;      // usage takes MTLTextureUsage values
desc.storageMode = scaledBuffer.storageMode; // must match the buffer's storage mode

id<MTLTexture> scaledTexture = [scaledBuffer newTextureWithDescriptor:desc
                                                               offset:0
                                                          bytesPerRow:imageWidth * sizeof(float)];
The image dimensions are 242x242. When I run this I get:
validateNewTexture:89: failed assertion `BytesPerRow of a buffer-backed
texture with pixelFormat(MTLPixelFormatR32Float) must be aligned to 256 bytes,
found bytesPerRow(968)'
I know I need to use:
NSUInteger alignmentBytes = [self.device minimumLinearTextureAlignmentForPixelFormat:MTLPixelFormatR32Float];
How do I define the buffer such that the bytes are properly aligned?
More generally, is this the appropriate approach for this kind of data? This is the stage where I effectively convert the float data into something that has color. To clarify, this is my next step:
// render into RGB texture
MPSImageConversion *imageConversion = [[MPSImageConversion alloc] initWithDevice:self.device
                                                                        srcAlpha:MPSAlphaTypeAlphaIsOne
                                                                       destAlpha:MPSAlphaTypeAlphaIsOne
                                                                 backgroundColor:nil
                                                                  conversionInfo:NULL];

[imageConversion encodeToCommandBuffer:commandBuffer
                         sourceTexture:scaledTexture
                    destinationTexture:intermediateRGBTexture];
where intermediateRGBTexture is a 2D texture defined with MTLPixelFormatRGBA16Float to take advantage of EDR.
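For reference, such a destination texture might be created along these lines (a sketch; the usage flags are an assumption based on the MPS kernel writing to it and later filters reading from it):

MTLTextureDescriptor *rgbDesc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA16Float
                                                       width:imageWidth
                                                      height:imageHeight
                                                   mipmapped:NO];
// Written by the MPS conversion, then read by the CIFilter/MPS stages.
rgbDesc.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
id<MTLTexture> intermediateRGBTexture = [device newTextureWithDescriptor:rgbDesc];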
If it's important to you that the texture share the same backing memory as the buffer, and you want the texture to reflect the actual image dimensions, you need to ensure that the data in the buffer is correctly aligned from the start. In your case each source row is 242 * sizeof(float) = 968 bytes, and the assertion tells you your device wants rows of a buffer-backed R32Float texture aligned to 256 bytes, so each row must be padded out to 1024 bytes. Rather than copying the source data all at once, ensure the buffer has room for all of the aligned rows, then copy the data one row at a time:
NSUInteger rowAlignment = [self.device minimumLinearTextureAlignmentForPixelFormat:MTLPixelFormatR32Float];
NSUInteger sourceBytesPerRow = imageWidth * sizeof(float);
NSUInteger bytesPerRow = AlignUp(sourceBytesPerRow, rowAlignment);

id<MTLBuffer> metalBuffer = [self.device newBufferWithLength:bytesPerRow * imageHeight
                                                     options:MTLResourceCPUCacheModeDefaultCache];

// Copy row by row; each destination row is padded out to the aligned stride.
const uint8_t *sourceData = floatData.bytes;
uint8_t *bufferData = metalBuffer.contents;
for (NSUInteger i = 0; i < imageHeight; ++i) {
    memcpy(bufferData + (i * bytesPerRow), sourceData + (i * sourceBytesPerRow), sourceBytesPerRow);
}
// Note: if you create the buffer with MTLResourceStorageModeManaged (as in your
// original code), follow CPU writes with [metalBuffer didModifyRange:...].
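With the rows padded like this, the buffer-backed texture from your snippet can be created using the aligned stride; this sketch reuses the descriptor from your question, with metalBuffer standing in for scaledBuffer:

id<MTLTexture> scaledTexture = [metalBuffer newTextureWithDescriptor:desc
                                                              offset:0
                                                         bytesPerRow:bytesPerRow];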
Where AlignUp is your alignment function or macro of choice. Something like this:
static inline NSUInteger AlignUp(NSUInteger n, NSUInteger alignment) {
    return ((n + alignment - 1) / alignment) * alignment;
}
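Since the alignments Metal reports are in practice powers of two, you could instead use the usual mask-based round-up. This variant (hypothetical name AlignUpPow2) is only valid under that power-of-two assumption:

static inline NSUInteger AlignUpPow2(NSUInteger n, NSUInteger alignment) {
    // Correct only when alignment is a power of two.
    return (n + alignment - 1) & ~(alignment - 1);
}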
It's up to you to determine whether the added complexity is worth saving a copy, but this is one way to achieve what you want.
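For comparison, if you give up the shared backing, the "extra copy" route can stay on the GPU: blit the tightly packed buffer into an ordinary texture. To my knowledge the 256-byte row-alignment rule applies to buffer-backed textures, not to blit copies, so the packed stride is fine here. A sketch, reusing the names from the question:

// One GPU-side copy from the packed buffer into a regular texture.
id<MTLTexture> plainTexture = [device newTextureWithDescriptor:desc];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromBuffer:scaledBuffer
        sourceOffset:0
   sourceBytesPerRow:imageWidth * sizeof(float)
 sourceBytesPerImage:imageWidth * sizeof(float) * imageHeight
          sourceSize:MTLSizeMake(imageWidth, imageHeight, 1)
           toTexture:plainTexture
    destinationSlice:0
    destinationLevel:0
   destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];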