Tags: swift · xcode · swiftui · realitykit · visionos

How Do I Execute an Action on Hover for Model Entities in Xcode for visionOS Development?


I am a newbie at developing apps for the Apple ecosystem. Currently, I'm working on a visionOS app, and I'd like model entities to be highlighted when the user's eyes focus on them and to return to normal when unfocused, something like a 3D button. On top of that, I want the focus/unfocus events to call some functions as well.

I was able to get an individual entity to highlight when focused; however, I haven't been able to solve the callback action for focus/unfocus.

The following code is my attempt at the problem:

var body: some View {
    ZStack {
        RealityView { content in
            // Loop through the LightIDs enum and create model entities
            for lightID in LightIDs.allCases {
                let model = ModelEntity(
                    mesh: .generateSphere(radius: uniformedScale),
                    materials: [SimpleMaterial(color: .red, isMetallic: false)]
                )

                let x = Float.random(in: -1.0 ... 1.0)
                model.position = SIMD3(x, 1.0, -2.0)

                model.name = lightID.rawValue
                model.components.set(InputTargetComponent())
                model.components.set(HoverEffectComponent())
                model.components.set(CollisionComponent(shapes: [.generateSphere(radius: uniformedScale)]))
                content.add(model)

//                model.onHover { hover in  // <-- error: ModelEntity has no member 'onHover'
//                    if hover {
//                        print("Mouse hover: \(model.name)")
//                        // do something else here
//                    }
//                }
            }
        }
    }
}

If anyone has any insight into this issue, that would be greatly appreciated.


Solution

  • You cannot take action when the user “hovers” on (looks at) a UI element, because eye-tracking information is not available to apps. From the Apple Vision Pro Privacy Overview:

    Where you look is not shared with apps because the content we look at, and how long we look at it, may reveal our thought process. visionOS processes eye movements at the system level, and doesn’t share where you are looking, or your eye input, with apps or websites before you engage with content. As a result, apps and websites only know what content you select when you tap your fingers together, not what you look at but don’t select.

    And:

    How apps respond to where you look

    We know that it’s important for you to be aware of content you’re about to tap on before you tap. As a result, you can tell what you’re about to select on Apple Vision Pro without sharing where you are looking with apps.

    visionOS automatically highlights buttons that you look at, without app developers needing to know where you look. For example, if you are looking at a button in an app, visionOS may provide some visual indication like making the button glow. Only when you select the button, by both looking at it and tapping your fingers together, does where you are looking get communicated to the app. Visual effects that respond to where you look, like a glowing button, are rendered out of process from the app. As a result, the apps you are using are not rendering the effects you see when you look at content — visionOS renders these animations because the apps you use do not know what you are looking at until you make a selection.
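
    In practice, then, the supported pattern is to let visionOS render the gaze highlight via HoverEffectComponent (as the question's code already does) and to run your own code only at selection time, when the user looks at the entity and pinches. Below is a minimal sketch (not from the original post) using a SpatialTapGesture targeted at entities; the entity setup mirrors the question with a placeholder radius, and handleSelection(_:) is a hypothetical callback name.

    import SwiftUI
    import RealityKit

    struct LightsView: View {
        var body: some View {
            RealityView { content in
                // Same entity setup as in the question (a single sphere shown for brevity).
                let model = ModelEntity(
                    mesh: .generateSphere(radius: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: false)]
                )
                model.name = "light1"
                model.position = SIMD3(0, 1.0, -2.0)
                model.components.set(InputTargetComponent())   // lets the entity receive input
                model.components.set(HoverEffectComponent())   // system-rendered gaze highlight
                model.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
                content.add(model)
            }
            // Fires only when the user looks at an entity AND pinches (i.e. selects it).
            // This is the earliest point at which the app learns which entity was engaged.
            .gesture(
                SpatialTapGesture()
                    .targetedToAnyEntity()
                    .onEnded { value in
                        handleSelection(value.entity)
                    }
            )
        }

        // Hypothetical callback; replace with whatever should happen on selection.
        private func handleSelection(_ entity: Entity) {
            print("Selected: \(entity.name)")
        }
    }

    This keeps the highlight-on-look behavior (handled entirely by the system) while giving you a place to call your own functions, but only on selection, never on gaze alone.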