Tags: android, augmented-reality, arcore, sceneform

Is there a way to use ViewRenderables with AugmentedFaceNodes in ArCore Face Tracking?


I am developing an app that recognizes faces and displays medical information about the person. My goal is to show the information in AR, in a ViewRenderable, near the person's face. Is there a way to use the face as an anchor, as we do in the case of plane-based AR?

The Augmented Faces examples only show ModelRenderables being used, which don't really help in my situation.


Solution

  • Probably the best way to accomplish this is to use the getCenterPose() method of the AugmentedFace class to track the center of the face and place your content relative to that pose. Another option is to add an overlay on top of the ARCore fragment with a section where you show the contextual medical information whenever your app recognizes a face.
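    As a sketch of the first option (untested, and assuming a Sceneform setup with an `ArSceneView`; `R.layout.medical_info_card` is a hypothetical layout you would replace with your own), you can attach a ViewRenderable to an AugmentedFaceNode, which keeps itself aligned with the face's center pose:

    ```java
    // Sketch: show a ViewRenderable next to each tracked face.
    // Assumes: session (com.google.ar.core.Session), arSceneView
    // (com.google.ar.sceneform.ArSceneView), and a layout resource
    // R.layout.medical_info_card -- all placeholders for your own setup.
    ViewRenderable.builder()
        .setView(context, R.layout.medical_info_card)
        .build()
        .thenAccept(renderable -> {
            for (AugmentedFace face : session.getAllTrackables(AugmentedFace.class)) {
                if (face.getTrackingState() != TrackingState.TRACKING) continue;

                // AugmentedFaceNode follows face.getCenterPose() automatically.
                AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
                faceNode.setParent(arSceneView.getScene());

                // Child node holds the info card, offset slightly above the
                // face center (local units are meters).
                Node card = new Node();
                card.setParent(faceNode);
                card.setLocalPosition(new Vector3(0f, 0.25f, 0f));
                card.setRenderable(renderable);
            }
        });
    ```

    In practice you would run this from a frame-update listener so cards are added and removed as faces enter and leave tracking.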

    I haven't used the Augmented Faces feature myself, but I suspect it cannot discriminate between different human faces on its own; you would likely need a separate machine-learning/face-recognition framework for that. Do verify this before committing to an approach.

    An anchor needs to be placed at a specific real-world position that is derived from stable visual features of the camera image. "Stable features" means the anchor is strongly coupled to what the camera is seeing at the moment you place it.

    So it's not a good idea to anchor to an object that moves a lot over time, because movement changes the image features and will likely disrupt the phone's tracking. I think this is one of the reasons you can only place an object in real space after placing an anchor. The docs also state that it is not possible to attach an anchor to an augmented face.
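    Since you can't anchor to the face, you instead re-derive the content's position from getCenterPose() each frame. ARCore's Pose.transformPoint() does this for you, but the underlying math is simple: rotate a local offset by the pose's quaternion and add the translation. Here is a plain-Java illustration (the class and method names are hypothetical, purely for demonstration):

    ```java
    // Illustrative helper: worldPos = translation + rotate(rotation, localOffset),
    // where translation/rotation would come from AugmentedFace.getCenterPose()
    // (pose.getTranslation(), pose.getRotationQuaternion()).
    public class FaceLabelMath {

        // Rotate vector v by unit quaternion q = (x, y, z, w) using
        // v' = v + w*t + q_xyz × t, with t = 2 * (q_xyz × v).
        static float[] rotate(float[] q, float[] v) {
            float[] u = {q[0], q[1], q[2]};
            float w = q[3];
            float[] t = cross(u, v);
            t[0] *= 2f; t[1] *= 2f; t[2] *= 2f;
            float[] ut = cross(u, t);
            return new float[]{
                v[0] + w * t[0] + ut[0],
                v[1] + w * t[1] + ut[1],
                v[2] + w * t[2] + ut[2],
            };
        }

        static float[] cross(float[] a, float[] b) {
            return new float[]{
                a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0],
            };
        }

        // World position of a label held at localOffset from the face center.
        public static float[] labelWorldPosition(float[] translation,
                                                 float[] rotation,
                                                 float[] localOffset) {
            float[] r = rotate(rotation, localOffset);
            return new float[]{
                translation[0] + r[0],
                translation[1] + r[1],
                translation[2] + r[2],
            };
        }
    }
    ```

    Recomputing this every frame keeps the label glued to the moving face without ever creating an anchor.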

    Hope this gives you some hints.