
Why is there a difference between pointer and input events?


While converting some code to the new MRTK RC1, I noticed two versions of events such as Up and Down: one for input and one for pointers. Now I am wondering what the difference is, and why it exists. Do I need to implement both if I want the same application running on desktop (where the mouse uses the input version) and on XR devices (the pointer version)?


Solution

  • Great question! My understanding of the difference between InputUp/Down and PointerUp/Down is about where the events come from. In short, I would recommend listening for pointer events instead of the raw input events. You do not need to listen to both types of events.

    InputDown/Up vs. PointerDown/Up

    InputUp/Down events are generated by controllers. For example, if you look at the classes that call MixedRealityInputSystem.RaiseOnInputDown, you will see the following files making those calls:

    GenericJoystickController.cs
    WindowsMixedRealityController.cs
    MouseController.cs
    

    In other words, these are 'raw inputs' that represent things like 'the select button was pressed on a controller'.
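    To receive these raw input events, a component can implement the input handler interface. The following is a minimal sketch, assuming the MRTK RC1 API surface (IMixedRealityInputHandler and InputEventData); verify the exact names against the MRTK version you are using:

    ```csharp
    using Microsoft.MixedReality.Toolkit.Input;
    using UnityEngine;

    // Sketch: logs raw input events, e.g. a controller's select button press.
    public class RawInputLogger : MonoBehaviour, IMixedRealityInputHandler
    {
        public void OnInputDown(InputEventData eventData)
        {
            // InputSource identifies the controller that raised the event.
            Debug.Log($"Input down from: {eventData.InputSource.SourceName}");
        }

        public void OnInputUp(InputEventData eventData)
        {
            Debug.Log($"Input up from: {eventData.InputSource.SourceName}");
        }
    }
    ```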

    In contrast, PointerDown/Up events are raised by pointers. For example, if you search for references to MixedRealityInputSystem.RaisePointerDown, you will find the following files:

    GazeProvider.cs
    BaseControllerPointer.cs
    GGVPointer.cs
    PokePointer.cs
    

    In other words, these are higher-level inputs that come from different kinds of pointers: near interaction pointers (sphere pointers), far interaction pointers (line pointers), or touch pointers (poke pointers).

    Why listen for PointerDown/Up instead of InputDown/Up

    Listening for pointer down and up events allows you to distinguish between things like near and far interaction, since you can inspect the pointer that sent the event and check whether it implements IMixedRealityNearPointer.
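    As a sketch of that check, assuming the MRTK RC1 pointer API (IMixedRealityPointerHandler, MixedRealityPointerEventData, and IMixedRealityNearPointer; the exact set of interface members may differ in your MRTK version):

    ```csharp
    using Microsoft.MixedReality.Toolkit.Input;
    using UnityEngine;

    // Sketch: distinguishes near from far interaction on pointer down.
    public class PointerKindLogger : MonoBehaviour, IMixedRealityPointerHandler
    {
        public void OnPointerDown(MixedRealityPointerEventData eventData)
        {
            if (eventData.Pointer is IMixedRealityNearPointer)
            {
                // e.g. sphere pointer or poke pointer
                Debug.Log("Near interaction pointer down");
            }
            else
            {
                // e.g. line pointer or GGV pointer
                Debug.Log("Far interaction pointer down");
            }
        }

        public void OnPointerUp(MixedRealityPointerEventData eventData) { }
        public void OnPointerClicked(MixedRealityPointerEventData eventData) { }
        public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    }
    ```

    Because pointer events are dispatched by the pointers themselves, the same handler works for mouse, gaze, hand, and controller input without separate code paths per device.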