I would like to grab the raw feature points found by ARKit. This API exposes the sparse point cloud: https://developer.apple.com/documentation/arkit/arframe/2887449-rawfeaturepoints, and outputs it as a list of vector_float3.

I would also like the descriptor (vector representation) of the feature at each of these point cloud coordinates — for example SIFT, SURF, BRIEF, or whatever ARKit uses internally. I could grab the image from the video feed and run feature detectors on it myself, but there's no guarantee those detections would correspond to the same 3D coordinates as the points in the sparse point cloud!
Apple doesn't expose the internal mechanism for locating or generating feature points. This is typical for Apple APIs — they tend to expose results without disclosing details of the underlying algorithms, which frees them to change those algorithms in the future, automatically improving the experience for all shipping products that use the API.

In fact, Apple cites that very reason, and implies that one should avoid analytic use of the point cloud, right there on the documentation page you linked:
> ARKit does not guarantee that the number and arrangement of raw feature points will remain stable between software releases, or even between subsequent frames in the same session.
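That said, if you need descriptors anyway, the closest you can get is the approach you mention in the question: take each raw feature point, project it back into that frame's captured image, and compute your own descriptors at those pixel locations (with Vision, OpenCV, etc.). This is a sketch of the projection step only — the pairing is approximate by design, since ARKit's own detector and any detector you run are unrelated:

```swift
import ARKit

// Sketch: map each raw 3D feature point of a frame into the pixel
// coordinate space of that frame's captured camera image. You could then
// sample your own descriptors (SIFT/SURF/BRIEF via OpenCV, or Vision
// feature prints) at these 2D locations. Untested outside a live session;
// must run on a device with an active ARSession.
func featurePixelLocations(in frame: ARFrame) -> [CGPoint] {
    // rawFeaturePoints is nil when no points were detected this frame.
    guard let points = frame.rawFeaturePoints?.points else { return [] }

    // Project into the captured image's own pixel dimensions.
    let imageSize = CGSize(
        width: CVPixelBufferGetWidth(frame.capturedImage),
        height: CVPixelBufferGetHeight(frame.capturedImage))

    return points.map { worldPoint in
        // projectPoint(_:orientation:viewportSize:) maps a world-space
        // point onto the 2D image plane for the given orientation.
        frame.camera.projectPoint(worldPoint,
                                  orientation: .landscapeRight,
                                  viewportSize: imageSize)
    }
}
```

Note that some projected points may fall outside the image bounds (points can be tracked slightly beyond the current view), so filter against `imageSize` before sampling descriptors. And per the documentation quote above, don't expect the same physical point to reappear with a stable identity from frame to frame.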