I'm developing an AR app for iOS that lets the user place a model in the physical world, using ARKit and SceneKit.
I've looked at this Apple sample code for inspiration. In the sample project, they use tracked raycasts to position 3D models in the scene. This is close to what I want, and it led me to assume I needed to do the same to get the most accurate positioning.
However, when I use a tracked raycast to position my model, the model drifts around the scene a lot as ARKit updates the raycast's position.
I get much more stable positioning with a non-tracked raycast. That leads me to ask: what is the intended use case for a tracked raycast? Am I misunderstanding this API?
I've tried both kinds of raycast, as described above. I also understand what an AR raycast is for in general: getting the intersection of a 2D point on the screen with the 3D geometry that ARKit is tracking, as this post has already explained.
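For reference, here is a minimal sketch of the non-tracked (one-shot) approach that gives me stable results. The names `sceneView` and `modelNode` are my own; assume `sceneView` is an `ARSCNView` and the model is already loaded:

```swift
import ARKit
import SceneKit

// One-shot (non-tracked) raycast placement on tap.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)

    // Build a raycast query from the 2D screen point and execute it once.
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .estimatedPlane,
                                             alignment: .any),
          let result = sceneView.session.raycast(query).first else { return }

    // Place the model at the hit location. Because this is not a tracked
    // raycast, ARKit never moves the node afterwards, so it stays put.
    modelNode.simdTransform = result.worldTransform
    sceneView.scene.rootNode.addChildNode(modelNode)
}
```

With the tracked variant (`session.trackedRaycast(_:updateHandler:)`), the update handler keeps firing as ARKit refines its world understanding, which is where the drift I'm seeing comes from.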
In Apple's example app you mentioned, raycasting is used to continuously update the FocusSquare. You don't really need a tracked raycast for placing your model. You can get a fixed real-world position (using the FocusSquare) and place the model at that exact location: just fetch the FocusSquare's static position data at the moment you add your model to the scene. I hope I understood correctly what you want.
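As a sketch of what I mean (assuming `focusSquare` is the FocusSquare node from Apple's sample and `sceneView` is your `ARSCNView`):

```swift
import SceneKit

// Read the focus square's position ONCE, at placement time, and use it
// as a static value. The model is never tied to a tracked raycast, so
// later ARKit updates to the focus square won't move it.
func placeModel(_ modelNode: SCNNode) {
    let position = focusSquare.simdWorldPosition  // snapshot, not a live link
    modelNode.simdWorldPosition = position
    sceneView.scene.rootNode.addChildNode(modelNode)
}
```

The tracked raycast exists precisely so that the FocusSquare (or any placement indicator) can follow ARKit's refined understanding of the scene in real time; once the user commits to a location, a snapshot like the one above is all you need.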