Tags: swift, opengl-es, scenekit, vuforia, metal

How to combine SCNRenderer with an existing MTLCommandBuffer?


I successfully integrated the Vuforia SDK Image Target Tracking feature into an iOS project by combining the OpenGL context (EAGLContext) that the SDK provides with an instance of SceneKit's SCNRenderer. That allowed me to leverage the simplicity of SceneKit's 3D API while still benefiting from Vuforia's high-precision image detection. Now I'd like to do the same, but with Metal in place of OpenGL.

Some background story

I was able to draw SceneKit objects on top of the live video texture drawn by Vuforia using OpenGL without major problems.

Here's the simplified setup I used with OpenGL:

func configureRenderer(for context: EAGLContext) {
    self.renderer = SCNRenderer(context: context, options: nil)
    self.scene = SCNScene()
    renderer.scene = scene

    // other scenekit setup
}

func render() {
    // manipulate scenekit nodes

    renderer.render(atTime: CFAbsoluteTimeGetCurrent())
}

Apple deprecates OpenGL on iOS 12

Since Apple announced that it is deprecating OpenGL on iOS 12, I figured it would be a good idea to try to migrate this project to use Metal instead of OpenGL.

That should be simple in theory, as Vuforia supports Metal out of the box. However, when I tried to integrate it, I hit a wall.

The question

The view only ever seems to render the results of the SceneKit renderer or the textures encoded by Vuforia, but never both at the same time; it depends on which is encoded first. What do I have to do to blend both results together?

Here's the problematic setup in a nutshell:

func configureRenderer(for device: MTLDevice) {
    self.renderer = SCNRenderer(device: device, options: nil)
    self.scene = SCNScene()
    renderer.scene = scene

    // other scenekit setup
}

func render(viewport: CGRect, commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    // manipulate scenekit nodes

    let renderPassDescriptor = MTLRenderPassDescriptor()
    renderPassDescriptor.colorAttachments[0].texture = drawable.texture
    renderPassDescriptor.colorAttachments[0].loadAction = .load
    renderPassDescriptor.colorAttachments[0].storeAction = .store
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0, blue: 0, alpha: 0)

    renderer!.render(withViewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)
}

I tried calling render either after encoder.endEncoding or before commandBuffer.renderCommandEncoderWithDescriptor is called:

metalDevice = MTLCreateSystemDefaultDevice();
metalCommandQueue = [metalDevice newCommandQueue];
id<MTLCommandBuffer> commandBuffer = [metalCommandQueue commandBuffer];

//// -----> call render(viewport:commandBuffer:drawable:) here <------- ////

id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];

// calls to encoder to render textures from Vuforia

[encoder endEncoding];

//// -----> or here <------- ////

[commandBuffer presentDrawable:drawable];
[commandBuffer commit];

In either case, I only see the results of the SCNRenderer or the results of the encoder, but never both in the same view.

It seems to me as if the encoding pass above and the SCNRenderer.render call are overwriting each other's buffers.
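
My understanding of Metal's load actions is that whichever pass targets drawable.texture second decides whether the first pass's output survives: .clear wipes the attachment at the start of the pass, the default .dontCare leaves its contents undefined, and only .load guarantees that the earlier result is still there to be drawn on top of. That is why I set loadAction to .load in the descriptor above, roughly like this self-contained sketch (the helper name is just for illustration):

import Metal

// Illustration only: the load action of the *later* pass decides whether the
// earlier pass's output in the drawable texture is kept or thrown away.
func makeOverlayPassDescriptor(for texture: MTLTexture) -> MTLRenderPassDescriptor {
    let overlayPass = MTLRenderPassDescriptor()
    overlayPass.colorAttachments[0].texture = texture
    // .load keeps whatever the previous pass stored into the texture, so this
    // pass draws on top of it; .clear would erase it, .dontCare leaves it undefined
    overlayPass.colorAttachments[0].loadAction = .load
    overlayPass.colorAttachments[0].storeAction = .store
    return overlayPass
}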

What am I missing here?


Solution

  • I think I've found an answer. I am rendering the SCNRenderer after endEncoding, but I'm creating a new render pass descriptor for it.

        // Pass Metal context data to Vuforia Engine (we may have changed the encoder since
        // calling Vuforia::Renderer::begin)
        finishRender(
            UnsafeMutableRawPointer(Unmanaged.passRetained(drawable!.texture).toOpaque()),
            UnsafeMutableRawPointer(Unmanaged.passRetained(encoder!).toOpaque())
        )
        
        // ========== Finish Metal rendering ==========
        encoder?.endEncoding()
        
        // Register a completion handler so the frame semaphore is signalled once
        // the command buffer has finished executing on the GPU
        commandBuffer?.addCompletedHandler { _ in self.mCommandExecutingSemaphore.signal() }
        let screenSize = UIScreen.main.bounds.size
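        // Build a fresh render pass descriptor for the SceneKit pass rather than
        // reusing the one that Vuforia's encoder was created with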
        let newDescriptor = MTLRenderPassDescriptor()
        
        // Draw to the drawable's texture
        newDescriptor.colorAttachments[0].texture = drawable?.texture
    
        // Store the data in the texture when rendering is complete
        newDescriptor.colorAttachments[0].storeAction = MTLStoreAction.store
        // Use mDepthTexture for depth operations
        newDescriptor.depthAttachment.texture = mDepthTexture
        renderer?.render(atTime: 0, viewport: CGRect(x: 0, y: 0, width: screenSize.width, height: screenSize.height), commandBuffer: commandBuffer!, passDescriptor: newDescriptor)
        
        // Present the drawable when the command buffer has been executed (Metal
        // calls to CoreAnimation to tell it to put the texture on the display when
        // the rendering is complete)
        commandBuffer?.present(drawable!)
        
        // Commit the command buffer for execution as soon as possible
        commandBuffer?.commit()
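
    One thing I would double-check in the descriptor above: it never sets a loadAction for the color attachment, and as far as I know the default is .dontCare, which leaves the texture's previous contents undefined when the pass begins. If the Vuforia camera background ever disappears behind the SceneKit content, explicitly loading the drawable's existing contents should guarantee it is preserved (the .load line below is my addition; newDescriptor is the descriptor from the snippet above):

        // Keep what Vuforia already rendered into the drawable when the SceneKit
        // pass begins (the default .dontCare leaves the contents undefined)
        newDescriptor.colorAttachments[0].loadAction = .load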