I'm currently doing some experiments with RealityKit.
I've been looking at some sample code, and I'm a bit confused about the differences between ARAnchor and AnchorEntity, and when to use one over the other.
So far I know that:

- AnchorEntity can also have other Entity instances as children, so you can add model objects directly to the anchor. You can't do this with ARAnchor; you have to add model objects "manually" to the rootNode and use the position of the anchor to place them correctly.
- ARAnchor is used to optimize the tracking in the area around the anchor. The documentation for AnchorEntity does not specify this.

Right now I add an AnchorEntity to the session as a "root node", since it's simpler to use, so that I can simply add models as children directly to this anchor. But then I also add an ARAnchor, located at the same position, to the scene's anchors, to enhance tracking around this point. Is this necessary?
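Simplified, my current setup looks something like this (the model entity and anchor name are just placeholders):

import ARKit
import RealityKit

let arView = ARView(frame: .zero)
let modelEntity = ModelEntity(mesh: .generateBox(size: 0.1))

// AnchorEntity used as a "root node" – models are added directly as children
let rootAnchor = AnchorEntity(world: .zero)
rootAnchor.addChild(modelEntity)
arView.scene.addAnchor(rootAnchor)

// Extra ARAnchor at the same position, added in the hope of better tracking
let arAnchor = ARAnchor(name: "rootAnchor", transform: matrix_identity_float4x4)
arView.session.add(anchor: arAnchor)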
Updated: November 26, 2023.
ARAnchor and AnchorEntity classes were both made for the same divine purpose – to tether 3D models to your real-world objects.
RealityKit's AnchorEntity greatly extends the capabilities of ARKit's ARAnchor. The most important difference between the two is that AnchorEntity automatically tracks a real-world target, while ARAnchor needs session(...) instance methods when you use RealityKit (or renderer(...) instance methods if you use SceneKit) to accomplish that. Also take into consideration that the collection of ARAnchors is stored in the ARSession object, while the collection of AnchorEntities is stored in the Scene.
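As a rough sketch of that difference (the image group and resource name below are hypothetical), an ARAnchor is tracked through ARSessionDelegate methods, whereas an AnchorEntity tracks its target on its own once it's added to the scene:

import ARKit
import RealityKit

let arView = ARView(frame: .zero)

final class AnchorTracker: NSObject, ARSessionDelegate {
    // With a plain ARAnchor you respond to its lifecycle yourself
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        print("ARAnchors added:", anchors.map { $0.identifier })
    }
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        // reposition your own content from anchor.transform here
    }
}

let tracker = AnchorTracker()
arView.session.delegate = tracker

// An AnchorEntity needs no delegate – it tracks the detected image automatically
let imageAnchor = AnchorEntity(.image(group: "AR Resources", name: "poster"))
arView.scene.addAnchor(imageAnchor)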
In addition, generating ARAnchors requires a manual session configuration, while generating AnchorEntities requires minimal developer involvement.
Hierarchical differences of iOS AR scenes.
The main advantage of RealityKit is the ability to use different AnchorEntities at the same time, such as .plane, .body or .object. There's an automaticallyConfigureSession instance property on RealityKit's ARView. When enabled, the ARView automatically runs an ARSession with a config that gets updated depending on your camera mode and scene anchors. When disabled, the session needs to be run manually with your own config.
arView.automaticallyConfigureSession = true // default
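If you set it to false, a minimal sketch of running your own configuration could look like this (the specific options are just an example):

import ARKit
import RealityKit

let arView = ARView(frame: .zero)
arView.automaticallyConfigureSession = false     // you now own the session's config

let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal, .vertical]
config.environmentTexturing = .automatic
arView.session.run(config)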
In ARKit, as you know, you can run just one config in the current session: World, Body, or Geo. There is an exception, however – you can run two configs together: FaceTracking and WorldTracking (one of them has to be the driver, and the other one – driven).
let config = ARFaceTrackingConfiguration()
config.isWorldTrackingEnabled = true
arView.session.run(config)
Apple Developer documentation says:
In RealityKit framework you use an AnchorEntity instance as the root of an entity hierarchy, and add it to the anchors collection for a Scene instance. This enables ARKit to place the anchor entity, along with all of its hierarchical descendants, into the real world. In addition to the components the anchor entity inherits from the Entity class, the anchor entity also conforms to the HasAnchoring protocol, giving it an AnchoringComponent instance.
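In code, that pattern looks roughly like this (the plane target and box model are just placeholders):

import RealityKit

let arView = ARView(frame: .zero)

// AnchorEntity as the root of an entity hierarchy
let planeAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [0.3, 0.3]))
let box = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])
planeAnchor.addChild(box)

// Add it to the anchors collection of the scene
arView.scene.anchors.append(planeAnchor)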
AnchorEntity has three building blocks:

- Anchoring component (with a target – world, body or image)
- Transform component
- Synchronization component

All entities have a Synchronization component that helps organise collaborative sessions.
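A quick way to see those building blocks, as a sketch, is to read them straight off an AnchorEntity:

import RealityKit

let anchor = AnchorEntity(.camera)

print(anchor.anchoring.target)         // Anchoring component – the tracking target (.camera here)
print(anchor.transform)                // Transform component – position, rotation, scale
print(anchor.synchronization as Any)   // Synchronization component – used in collaborative sessions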
In RealityKit for iOS, the AnchorEntity object has eleven specific anchor types for eleven different scenarios (but remember that in RealityKit for macOS you can only work with two types of anchors – AnchorEntity(world: .zero) and AnchorEntity(.camera)).
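A few of those initializers as a sketch (the image group and resource name are hypothetical):

import RealityKit

// Per the note above, these two also work in RealityKit for macOS
let worldAnchor  = AnchorEntity(world: .zero)
let cameraAnchor = AnchorEntity(.camera)

// Some of the iOS-specific targets
let planeAnchor  = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
let faceAnchor   = AnchorEntity(.face)
let bodyAnchor   = AnchorEntity(.body)
let imageAnchor  = AnchorEntity(.image(group: "AR Resources", name: "poster"))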
You're able to use both ARAnchor and AnchorEntity objects simultaneously in your app.
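One way to do that, as a sketch, is to register an ARAnchor with the session and then wrap it in an AnchorEntity:

import ARKit
import RealityKit

let arView = ARView(frame: .zero)

// Plain ARAnchor registered with the ARSession
let arAnchor = ARAnchor(name: "shared anchor", transform: matrix_identity_float4x4)
arView.session.add(anchor: arAnchor)

// AnchorEntity wrapping that ARAnchor, so RealityKit content follows the same anchor
let anchorEntity = AnchorEntity(anchor: arAnchor)
arView.scene.addAnchor(anchorEntity)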
You can find more info on ARAnchor and AnchorEntity in THIS POST.