Tags: swift, arkit, metal, realitykit, roomplan

LiDAR and RealityKit – Capture a Real World Texture for a Scanned Model


Task

I would like to capture a real-world texture and apply it to a reconstructed mesh produced with the help of a LiDAR scanner. I suppose that Projection-View-Model matrices should be used for that. The texture must be made from a fixed point of view, for example, from the center of a room. However, it would be an ideal solution if we could apply the environmentTexturing data, collected as a cube-map texture in the scene.


Look at the 3D Scanner App. It's a reference app that allows us to export a model together with its texture.

I need to capture the texture in a single pass; I do not need to update it in real time. I realize that changing the PoV leads to a wrong perception of the texture, in other words, to texture distortion. I also realize that RealityKit provides dynamic tessellation and automatic texture mipmapping (the texture's resolution depends on the distance it was captured from). A rough sketch of the projection idea follows the configuration code below.

import RealityKit
import ARKit
import Metal
import ModelIO

class ViewController: UIViewController, ARSessionDelegate {
    
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        arView.session.delegate = self
        // Visualize the reconstructed LiDAR mesh.
        arView.debugOptions.insert(.showSceneUnderstanding)

        // World tracking with LiDAR scene reconstruction
        // and automatic environment texturing.
        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .mesh
        config.environmentTexturing = .automatic
        arView.session.run(config)
    }
}
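
For context, here's a rough, untested sketch of the projection idea mentioned above. The helper below is hypothetical (its name and structure are mine, not an established API): it takes one ARFrame captured from the chosen PoV and projects every vertex of an ARMeshAnchor into that frame to get per-vertex UV coordinates. Visibility and occlusion tests are deliberately left out.

import ARKit
import UIKit

// Hypothetical sketch: compute a UV coordinate for every vertex of a mesh
// anchor by projecting the vertex into a single captured camera image.
func projectedUVs(for meshAnchor: ARMeshAnchor,
                  in frame: ARFrame,
                  orientation: UIInterfaceOrientation = .landscapeRight) -> [simd_float2] {

    let camera = frame.camera
    let imageSize = camera.imageResolution          // captured image size in pixels
    let vertices = meshAnchor.geometry.vertices     // packed float3 vertex buffer
    var uvs: [simd_float2] = []
    uvs.reserveCapacity(vertices.count)

    for index in 0..<vertices.count {
        // Read one packed float3 vertex (anchor-local space) from the Metal buffer.
        let pointer = vertices.buffer.contents()
                              .advanced(by: vertices.offset + vertices.stride * index)
        let v = pointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee

        // Move the vertex into world space using the anchor's transform.
        let world = meshAnchor.transform * simd_float4(v.0, v.1, v.2, 1)

        // Project the world-space point into the captured image...
        let point = camera.projectPoint(simd_float3(world.x, world.y, world.z),
                                        orientation: orientation,
                                        viewportSize: imageSize)

        // ...and normalize to the 0...1 UV range.
        uvs.append(simd_float2(Float(point.x / imageSize.width),
                               Float(point.y / imageSize.height)))
    }
    return uvs
}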

Question

  • How to capture and apply a real world texture to a reconstructed 3D mesh?



Solution

  • Object Reconstruction

    On 10 October 2023, Apple released the iOS Reality Composer 1.6 app, which is capable of capturing a real-world model's mesh with its texture in real time using the LiDAR scanning process. But at the moment there's still no native programmatic API for that (we are all looking forward to it).

    Also, there's a methodology that allows developers to create textured models from a series of shots.

    Photogrammetry

    The Object Capture API, announced at WWDC 2021, provides developers with the long-awaited photogrammetry tool. At the output we get a USDZ model with a UV-mapped high-resolution texture. To implement the Object Capture API you need macOS 12+ and Xcode 13+.


    To create a USDZ model from a series of shots, submit all the captured images to RealityKit's PhotogrammetrySession.

    Here's a code snippet that sheds some light on this process:

    import RealityKit
    import Combine

    // Folder containing the captured images of the object.
    let pathToImages = URL(fileURLWithPath: "/path/to/my/images/")

    // Destination file for the reconstructed model.
    let url = URL(fileURLWithPath: "model.usdz")

    // Ask the session for a medium-detail USDZ file.
    let request = PhotogrammetrySession.Request.modelFile(url: url,
                                                          detail: .medium)

    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOverlap = .normal
    configuration.sampleOrdering = .unordered
    configuration.featureSensitivity = .normal
    configuration.isObjectMaskingEnabled = false

    // The initializer throws, so bind the session optionally.
    guard let session = try? PhotogrammetrySession(input: pathToImages,
                                                   configuration: configuration)
    else { return }

    var subscriptions = Set<AnyCancellable>()

    session.output.receive(on: DispatchQueue.global())
                  .sink(receiveCompletion: { _ in
                      // errors
                  }, receiveValue: { _ in
                      // output
                  })
                  .store(in: &subscriptions)

    try? session.process(requests: [request])
    

    You can reconstruct USD and OBJ models with their corresponding UV-mapped textures.
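
    A quick way to sanity-check the result is to load the exported file back into RealityKit. This is a hedged sketch, assuming the request above has completed and written model.usdz to disk:

    import RealityKit

    // Hedged sketch: load the reconstructed USDZ back in to verify that the mesh
    // and its UV-mapped texture came out as expected.
    let modelURL = URL(fileURLWithPath: "model.usdz")
    if let model = try? Entity.loadModel(contentsOf: modelURL) {
        print("Reconstructed model bounds:", model.visualBounds(relativeTo: nil))
    }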