ios · swift · coreml · semantic-segmentation · apple-vision

How can I resize my MLMultiArray to fit my camera texture size?


I am doing semantic segmentation of people with the DeepLabV3 mlmodel. The prediction output is a 513×513 MLMultiArray. Currently I am resizing my camera output down to that size in order to apply the segmentation map.

How can I resize the MLMultiArray to match my camera texture size?

if let observations = request.results as? [VNCoreMLFeatureValueObservation],
   let segmentationmap = observations.first?.featureValue.multiArrayValue {

    // shape is [513, 513] (rows, columns)
    guard let row = segmentationmap.shape[0] as? Int,
          let col = segmentationmap.shape[1] as? Int else {
        return
    }
}

Solution

  • Here's some demo code that runs DeepLab V3 in an iOS app and renders the segmentation mask on the GPU with Metal: https://github.com/hollance/SemanticSegmentationMetalDemo
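  • Alternatively, you can go the other way around: instead of shrinking the camera texture to 513×513, upscale the segmentation map to the camera size. Below is a minimal CPU-side sketch (not taken from the linked demo) that turns the Int32 label map into a grayscale CGImage and lets Core Image scale it. The helper name maskImage, the targetSize parameter, and the hard-coded person class (15 in the Pascal VOC labels DeepLabV3 uses) are assumptions for illustration, and the sketch assumes the array's Int32 data is laid out contiguously, which is what the Core ML DeepLabV3 model produces.

import CoreML
import CoreGraphics
import CoreImage

// Hypothetical helper: converts the 513×513 Int32 segmentation map into an
// 8-bit grayscale mask (white = person, black = background), then scales it
// up to the camera texture size with Core Image.
func maskImage(from segmentationmap: MLMultiArray,
               targetSize: CGSize) -> CGImage? {
    let rows = segmentationmap.shape[0].intValue  // 513
    let cols = segmentationmap.shape[1].intValue  // 513

    // DeepLabV3 emits one Int32 class index per pixel; read them directly.
    let labels = segmentationmap.dataPointer.bindMemory(to: Int32.self,
                                                        capacity: rows * cols)
    var pixels = [UInt8](repeating: 0, count: rows * cols)
    for i in 0..<(rows * cols) {
        pixels[i] = labels[i] == 15 ? 255 : 0  // 15 = "person" in Pascal VOC
    }

    // Wrap the bytes in a grayscale CGImage...
    guard let provider = CGDataProvider(data: Data(pixels) as CFData),
          let maskCG = CGImage(width: cols,
                               height: rows,
                               bitsPerComponent: 8,
                               bitsPerPixel: 8,
                               bytesPerRow: cols,
                               space: CGColorSpaceCreateDeviceGray(),
                               bitmapInfo: CGBitmapInfo(rawValue: 0),
                               provider: provider,
                               decode: nil,
                               shouldInterpolate: true,
                               intent: .defaultIntent) else {
        return nil
    }

    // ...and let Core Image resize it to the camera resolution.
    let scaled = CIImage(cgImage: maskCG)
        .transformed(by: CGAffineTransform(scaleX: targetSize.width / CGFloat(cols),
                                           y: targetSize.height / CGFloat(rows)))
    return CIContext().createCGImage(scaled, from: scaled.extent)
}

    For per-frame use you would want to reuse a single CIContext rather than creating one per call (it is expensive to construct), and for real-time compositing the GPU route taken by the Metal demo above will be considerably faster than this CPU round trip.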