graphics · textures · rendering · metal

How to use different fragment shaders in one Metal API scene?


I've been experimenting with Apple's Metal API recently, which brings me to the question in the title: how do you use different fragment shaders in a single Metal scene? Is it possible at all?

The background: an entire geometric primitive is rendered by a simple vertex/fragment shader pair, with the colors defined or calculated inside the shaders (say we have a cube, and all of its faces are rendered this way). Next, part of the primitive needs to be rendered with a texture as well (adding a picture to just one of the faces).

Do we need different fragment shaders to accomplish this? I suppose one workaround is to bind some default (placeholder) texture in the first step, so a single texturing shader could handle both cases.
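
For example, something like this sketch could create a 1x1 white placeholder to bind for the untextured faces (whiteTexture is my own name, and this assumes the fragment shader multiplies the sampled texel by the vertex color, so a white texel leaves the color unchanged):

// Describe and create a 1x1 RGBA texture to use as a placeholder
MTLTextureDescriptor *placeholderDescriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA8Unorm
                                                        width:1
                                                       height:1
                                                    mipmapped:NO];
id<MTLTexture> whiteTexture = [_device newTextureWithDescriptor:placeholderDescriptor];

// Fill the single texel with opaque white
uint8_t whitePixel[4] = {255, 255, 255, 255};
[whiteTexture replaceRegion:MTLRegionMake2D(0, 0, 1, 1)
                mipmapLevel:0
                  withBytes:whitePixel
                bytesPerRow:4];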

What would you recommend?

//============= Edit =============//

I tried using two different MTLRenderPipelineState objects with two different pairs of shader functions, as Warren suggested, but with the following code I don't get the desired result. Each pipeline state renders as expected when used on its own; when both are used together, only the first one shows up.

creation:

id <MTLFunction> fragmentProgram = [_defaultLibrary newFunctionWithName:@"color_fragment"];

// Load the vertex program into the library
id <MTLFunction> vertexProgram = [_defaultLibrary newFunctionWithName:@"lighting_vertex"];

// Create a vertex descriptor from the MTKMesh
MTLVertexDescriptor *vertexDescriptor = MTKMetalVertexDescriptorFromModelIO(_boxMesh.vertexDescriptor);
vertexDescriptor.layouts[0].stepRate = 1;
vertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;

// Create a reusable pipeline state
MTLRenderPipelineDescriptor *pipelineStateDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
pipelineStateDescriptor.label = @"MyPipeline";
pipelineStateDescriptor.sampleCount = _view.sampleCount;
pipelineStateDescriptor.vertexFunction = vertexProgram;
pipelineStateDescriptor.fragmentFunction = fragmentProgram;
pipelineStateDescriptor.vertexDescriptor = vertexDescriptor;
pipelineStateDescriptor.colorAttachments[0].pixelFormat = _view.colorPixelFormat;
pipelineStateDescriptor.depthAttachmentPixelFormat = _view.depthStencilPixelFormat;
pipelineStateDescriptor.stencilAttachmentPixelFormat = _view.depthStencilPixelFormat;

NSError *error = NULL;
_pipelineStateColor = [_device newRenderPipelineStateWithDescriptor:pipelineStateDescriptor error:&error];
if (!_pipelineStateColor) {
    NSLog(@"Failed to created pipeline state, error %@", error);
}

// Reuse the same descriptor for the second pipeline, swapping only the fragment function
pipelineStateDescriptor.fragmentFunction = [_defaultLibrary newFunctionWithName:@"texture_fragment"];
_pipelineStateTexture = [_device newRenderPipelineStateWithDescriptor:pipelineStateDescriptor error:&error];
if (!_pipelineStateTexture) {
    NSLog(@"Failed to created pipeline state, error %@", error);
}

rendering:

- (void)renderInto:(id<MTLRenderCommandEncoder>)renderEncoder
 withPipelineState:(id<MTLRenderPipelineState>)pipelineState
{
    [renderEncoder setRenderPipelineState:pipelineState];
    [renderEncoder setVertexBuffer:_boxMesh.vertexBuffers[0].buffer offset:_boxMesh.vertexBuffers[0].offset atIndex:0 ];
    [renderEncoder setVertexBuffer:_dynamicConstantBuffer offset:(sizeof(uniforms_t) * _constantDataBufferIndex) atIndex:1 ];
    [renderEncoder setVertexBuffer:_textureBuffer offset:0 atIndex:2];
    [renderEncoder setFragmentTexture:_textureData atIndex:0];

    MTKSubmesh* submesh = _boxMesh.submeshes[0];

    [renderEncoder drawIndexedPrimitives:submesh.primitiveType
                              indexCount:submesh.indexCount
                               indexType:submesh.indexType
                             indexBuffer:submesh.indexBuffer.buffer
                       indexBufferOffset:submesh.indexBuffer.offset];
}

- (void)_render
{
    dispatch_semaphore_wait(_inflight_semaphore, DISPATCH_TIME_FOREVER);

    [self _update];

    id <MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];

    __block dispatch_semaphore_t block_sema = _inflight_semaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        dispatch_semaphore_signal(block_sema);
    }];

    MTLRenderPassDescriptor* renderPassDescriptor = _view.currentRenderPassDescriptor;

    if(renderPassDescriptor != nil)
    {
        id <MTLRenderCommandEncoder> renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];

        renderEncoder.label = @"MyRenderEncoder";

        [renderEncoder setDepthStencilState:_depthState];

        [self renderInto:renderEncoder withPipelineState:_pipelineStateColor];

        [self renderInto:renderEncoder withPipelineState:_pipelineStateTexture];

        [renderEncoder endEncoding];

        [commandBuffer presentDrawable:_view.currentDrawable];
    }

    _constantDataBufferIndex = (_constantDataBufferIndex + 1) % kMaxInflightBuffers;

    [commandBuffer commit];
}

and finally the fragment shaders:

fragment float4 color_fragment(ColorInOut  in [[stage_in]])
{
    return float4(0.8f, 0.f, 0.1f, 0.5f);
}

fragment float4 texture_fragment(ColorInOut                       in [[stage_in]],
                                 texture2d<float, access::sample> texture [[texture(0)]])
{
    constexpr sampler s(coord::normalized,
                        address::clamp_to_zero,
                        filter::linear);

    return texture.sample(s, in.texture_coordinate);
}
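
where ColorInOut is the interpolated vertex output struct, roughly like this (the exact fields are whatever the vertex shader declares; I'm only showing texture_coordinate because it's used above):

struct ColorInOut {
    float4 position [[position]];
    float4 color;
    float2 texture_coordinate;
};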

Solution

  • You can use multiple fragment shaders in a single frame/pass by creating multiple render pipeline states. Simply create a pipeline state for each vertex/fragment function pair, and call setRenderPipelineState: on your render command encoder to set the appropriate pipeline state before issuing the draw call. You will need to write separate fragment shader functions for doing the color passthrough and the texture sampling.
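
A rough sketch of what the encoding flow looks like with two pipeline states (coloredSubmesh, texturedSubmesh, and _faceTexture are hypothetical names; vertex buffers and uniforms are assumed to be bound already):

// Draw the plain-colored geometry with the color pipeline
[renderEncoder setRenderPipelineState:_pipelineStateColor];
[renderEncoder drawIndexedPrimitives:coloredSubmesh.primitiveType
                          indexCount:coloredSubmesh.indexCount
                           indexType:coloredSubmesh.indexType
                         indexBuffer:coloredSubmesh.indexBuffer.buffer
                   indexBufferOffset:coloredSubmesh.indexBuffer.offset];

// Switch pipelines on the same encoder and draw the textured geometry
[renderEncoder setRenderPipelineState:_pipelineStateTexture];
[renderEncoder setFragmentTexture:_faceTexture atIndex:0];
[renderEncoder drawIndexedPrimitives:texturedSubmesh.primitiveType
                          indexCount:texturedSubmesh.indexCount
                           indexType:texturedSubmesh.indexType
                         indexBuffer:texturedSubmesh.indexBuffer.buffer
                   indexBufferOffset:texturedSubmesh.indexBuffer.offset];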