Tags: julia, point-clouds, semantic-segmentation, unet-neural-network

Point cloud to image fed to UNet: how to map the predicted image back to the 3D point cloud?


I have a .las file and I performed the following operations:

  1. Convert the point cloud to an RGB image
  2. Convert the point cloud to a ground-truth label matrix
  3. Crop the images and corresponding ground-truth matrices to a fixed size of 256×256
  4. Train a UNet on the image/ground-truth pairs
  5. Run inference to get a prediction matrix in which each pixel holds a predicted label
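Step 1 above (projecting the point cloud onto a 2D image grid) can be sketched as a binning operation. This is a minimal, hypothetical illustration, not the asker's actual code: the helper name `bin_points` and the assumption that the points are stored as an N×3 matrix of (x, y, z) coordinates are mine.

```julia
# Hypothetical sketch: assign each 3D point a (row, col) pixel index by
# binning its (x, y) coordinates into an H×W grid. Assumes `points` is an
# N×3 matrix of (x, y, z) coordinates.
function bin_points(points::AbstractMatrix, H::Int, W::Int)
    xmin, xmax = extrema(points[:, 1])
    ymin, ymax = extrema(points[:, 2])
    # Map each coordinate into [1, W] / [1, H]; clamp so the maximum
    # coordinate does not fall one bin past the end.
    cols = clamp.(floor.(Int, (points[:, 1] .- xmin) ./ (xmax - xmin) .* W) .+ 1, 1, W)
    rows = clamp.(floor.(Int, (points[:, 2] .- ymin) ./ (ymax - ymin) .* H) .+ 1, 1, H)
    return rows, cols
end
```

The key design point is to keep `rows` and `cols` around after rendering the image: they record which pixel each point fell into, which is exactly what is needed later to go back from 2D to 3D.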

Now I have a prediction matrix, but I don't know how to map it back onto the point cloud so I can see what the 3D predicted classification looks like. I'm using Julia.


Solution

  • I used binning, the same scheme that projected the 3D points to 2D, to go back from 2D to 3D: each point keeps the index of the pixel (bin) it fell into, so the pixel's predicted label can simply be assigned back to the point.
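The inverse mapping described above can be sketched as follows. This is a hypothetical illustration (the helper name `labels_for_points` is mine): it assumes you saved, for each point, the (row, col) pixel index computed during the original 3D→2D binning, so each point can simply look up the predicted label of its pixel.

```julia
# Hypothetical sketch: given the H×W prediction matrix and the per-point
# pixel indices saved during binning, return a per-point label vector.
function labels_for_points(pred::AbstractMatrix{<:Integer},
                           rows::AbstractVector{<:Integer},
                           cols::AbstractVector{<:Integer})
    # Point i fell into pixel (rows[i], cols[i]); copy that pixel's label.
    return [pred[r, c] for (r, c) in zip(rows, cols)]
end
```

The resulting label vector has one entry per point, so it can be written back into the .las classification field or used to color the points for 3D visualization. Note that all points sharing a pixel receive the same label, since binning collapses them into one cell.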