I wrote some simple code, shown below, to check whether the GPU could do some computational work.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
NSLog(@"Device: %@", [device name]);
id<MTLCommandQueue> commandQueue = [device newCommandQueue];
NSError * ns_error = nil;
id<MTLLibrary> defaultLibrary = [device newLibraryWithFile:@"/Users/i/tmp/tmp6/s.metallib" error:&ns_error];
// Buffer for storing encoded commands that are sent to GPU
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
// Encoder for GPU commands
id <MTLComputeCommandEncoder> computeCommandEncoder = [commandBuffer computeCommandEncoder];
// Set input and output data
float tmpbuf[1000];
float outbuf[1000];
for( int i = 0; i < 1000; i++ )
{
    tmpbuf[i] = i;
    outbuf[i] = 0;
}
int tmp_length = 1000*sizeof(float);
id<MTLBuffer> inVectorBuffer = [device newBufferWithBytes: tmpbuf length: tmp_length options: MTLResourceOptionCPUCacheModeDefault ];
[computeCommandEncoder setBuffer: inVectorBuffer offset: 0 atIndex: 0 ];
id<MTLBuffer> outVectorBuffer = [device newBufferWithBytes: outbuf length: tmp_length options: MTLResourceOptionCPUCacheModeDefault ];
[computeCommandEncoder setBuffer: outVectorBuffer offset: 0 atIndex: 1 ];
// Get the kernel function from the library
id<MTLFunction> newfunc = [ defaultLibrary newFunctionWithName:@"sigmoid" ];
// Get the compute pipeline state
id<MTLComputePipelineState> cpipeline = [device newComputePipelineStateWithFunction: newfunc error:&ns_error ];
[computeCommandEncoder setComputePipelineState:cpipeline ];
// Dispatch a 1D grid of 1000 threads (10 threadgroups of 100 threads),
// one thread per element, to match the scalar thread_position_in_grid in the kernel
MTLSize ts = {100, 1, 1};
MTLSize numThreadgroups = {10, 1, 1};
[computeCommandEncoder dispatchThreadgroups:numThreadgroups threadsPerThreadgroup:ts];
[ computeCommandEncoder endEncoding ];
[ commandBuffer commit];
// Get the data computed by the GPU
NSData* outdata = [NSData dataWithBytesNoCopy:[outVectorBuffer contents ] length: tmp_length freeWhenDone:NO ];
float final_out[1000];
[outdata getBytes:final_out length:tmp_length];
// In my opinion, each value of final_out should be 10.0, but it prints 0
for( int i = 0; i < 1000; i++ )
{
    printf("%.2f : %.2f\n", tmpbuf[i], final_out[i]);
}
The shader file, named s.shader, is as follows; it assigns each output element the value 10.0:
#include <metal_stdlib>
using namespace metal;

kernel void sigmoid(const device float *inVector [[ buffer(0) ]],
                    device float *outVector [[ buffer(1) ]],
                    uint id [[ thread_position_in_grid ]]) {
    // This would normally compute the sigmoid for _one_ position (=id) per call on the GPU;
    // here it just writes 10.0 so the result is easy to verify.
    outVector[id] = 10.0;
}
In the code above, I read the data computed by the GPU into the variable final_out. In my opinion, each value of final_out should be 10.0, as written in s.shader. However, all values of final_out are 0. Is there a problem in getting the data back from the GPU? Thanks.
Committing a command buffer simply tells the driver to start executing it. If you want to read back the results of a GPU operation on the CPU, you either need to block the current thread with -waitUntilCompleted, or add a block to be called when the command buffer completes with the -addCompletedHandler: method.
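For example, replacing the bare [commandBuffer commit] in the question, a minimal blocking sketch using the question's variable names would be:

[commandBuffer commit];
// Block the CPU until the GPU has finished executing this command buffer;
// only then do the buffer's contents reflect what the kernel wrote.
[commandBuffer waitUntilCompleted];

float *results = (float *)[outVectorBuffer contents];
for (int i = 0; i < tmp_length / sizeof(float); i++)
{
    printf("%.2f\n", results[i]);
}

The non-blocking alternative registers the handler before committing:

[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
    // Runs on a Metal-owned thread once the GPU work has finished.
    float *results = (float *)[outVectorBuffer contents];
    printf("first element: %.2f\n", results[0]);
}];
[commandBuffer commit];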
One other note: it looks like you're using buffers with a storage mode of Shared. If you were ever to use buffers with a storage mode of Managed, you'd also need to create a blit command encoder and call synchronizeResource: with the appropriate buffer(s), then wait for it to complete as described above, in order to copy back the results from the GPU.
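For completeness, a rough sketch of that managed-buffer path, reusing the names from the question and assuming outVectorBuffer had been created with MTLResourceStorageModeManaged:

// Encode a blit pass after the compute encoder has ended, so the GPU's copy
// of the managed buffer is synchronized back to CPU-accessible memory.
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder synchronizeResource:outVectorBuffer];
[blitEncoder endEncoding];

[commandBuffer commit];
[commandBuffer waitUntilCompleted];
// [outVectorBuffer contents] now reflects what the GPU wrote.

Note that managed storage (and synchronizeResource:) only applies on macOS; with shared storage, as in the question, this blit step isn't needed.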