Tags: unity-game-engine, mrtk

Unity MRTK: Terrain Interaction


I'm looking for some guidance from anyone who has had any luck interacting with Unity Terrain and the MRTK.

I'm using Online Maps and I'm trying to port an app over to the HoloLens 2. Everything is in place, except that I can't seem to trigger a click on the terrain, which is central to what I need to do.

Basically I render a terrain of a set geographic location, and wherever the user clicks on the terrain, I store those coordinates and will generate a 3d model at that location for other users to see.

In the editor during Play, everything works if I click on the terrain with the mouse. With the gaze circle, I can see the gaze collide with the terrain just fine, and I can see the circle shrink during a mouse click, but the click event never fires (the actual mouse is off the terrain at that point, which is why nothing happens). Using the space bar and the hand stand-in, the cast ray doesn't hit the terrain at all, which is exactly what I see when I build and deploy to the headset: the terrain acts as if it has no collider and is all but ignored by every interaction.

I have tried every possible combination of Interactable states, but the most I can get is a basic "the terrain was clicked" event, which ignores where the terrain was clicked, and that location is the key to what I need to achieve.

In essence, I need to figure out how to replicate a mouse click either with the ray cast from the headset or by actually touching the terrain.

Side note: I've noticed that when I hold the space bar and use the simulated hand in the editor, I can't interact with my buttons either. These are the same button prefabs used in the MRTK examples, and they respond in the editor if I use the gaze circle. Maybe if I can figure out how to make the hand interact with the buttons, it will point me in the right direction for getting it to interact with the terrain.


Solution

  • So after a lot of digging around, here is the solution I came up with.

    First, the component code, which is attached to an empty GameObject:

    using Microsoft.MixedReality.Toolkit;
    using Microsoft.MixedReality.Toolkit.Utilities;
    using UnityEngine;
    
    public class GetCoordHandler : MonoBehaviour
    {
        private Vector3 gazePoint, gazeNormal;

        private void Update()
        {
            // Key Z in the editor, A button on an Xbox controller.
            if (Input.GetKeyDown(KeyCode.Z) || Input.GetKeyDown(KeyCode.Joystick1Button0))
            {
                // Make sure the gaze has actually hit something.
                if (CoreServices.InputSystem.GazeProvider.GazeTarget)
                {
                    // Grab the world-space point where the gaze pointer hit.
                    gazePoint = CoreServices.InputSystem.GazeProvider.HitInfo.point;
                    // Convert this position to screen coords and fire the method.
                    AddMarker(CameraCache.Main.WorldToScreenPoint(gazePoint));
                }
            }
        }

        private void AddMarker(Vector3 point)
        {
            // Get the coordinates under the cursor.
            double lng, lat;
            // Custom method that returns the coordinates.
            OnlineMapsControlBase.instance.GetCoords_MRTK(point, out lng, out lat);

            Debug.Log("Lat: " + lat + "  Lng: " + lng);
            // Create a label for the marker.
            string label = "Marker: " + (OnlineMapsMarkerManager.CountItems + 1);
            // Create a new marker.
            OnlineMapsMarkerManager.CreateItem(lng, lat, label);
        }
    }
    

    And finally, a simple custom method added to OnlineMapsControlBase that takes the screen-space position converted from the gaze hit instead of reading the mouse position, and hands it off to the existing GetCoords method:

    public bool GetCoords_MRTK(Vector2 gazePosition, out double lng, out double lat)
    {
        return GetCoords(gazePosition, out lng, out lat);
    }
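
    For reference, below is a minimal, untested sketch of how the same marker placement could be driven by the headset's select gesture (air tap / pinch) instead of the Z key, by registering a global IMixedRealityPointerHandler. The class name GazeSelectMarkerHandler is only illustrative, and it assumes the GetCoords_MRTK method above is already in place; treat it as a starting point rather than part of the tested solution.

    using Microsoft.MixedReality.Toolkit;
    using Microsoft.MixedReality.Toolkit.Input;
    using Microsoft.MixedReality.Toolkit.Utilities;
    using UnityEngine;

    // Hypothetical alternative trigger: place the marker on the MRTK "select"
    // event (air tap / pinch) instead of polling for a key press in Update().
    public class GazeSelectMarkerHandler : MonoBehaviour, IMixedRealityPointerHandler
    {
        private void OnEnable()
        {
            // Listen to pointer events globally, not just when this object has focus.
            CoreServices.InputSystem?.RegisterHandler<IMixedRealityPointerHandler>(this);
        }

        private void OnDisable()
        {
            CoreServices.InputSystem?.UnregisterHandler<IMixedRealityPointerHandler>(this);
        }

        public void OnPointerClicked(MixedRealityPointerEventData eventData)
        {
            // The pointer result holds the world-space hit of the gaze / hand ray.
            var result = eventData.Pointer?.Result;
            if (result == null || result.Details.Object == null) return;

            // Convert the world-space hit to screen coords, same as GetCoordHandler.
            Vector3 screenPoint = CameraCache.Main.WorldToScreenPoint(result.Details.Point);

            double lng, lat;
            if (OnlineMapsControlBase.instance.GetCoords_MRTK(screenPoint, out lng, out lat))
            {
                string label = "Marker: " + (OnlineMapsMarkerManager.CountItems + 1);
                OnlineMapsMarkerManager.CreateItem(lng, lat, label);
            }
        }

        // Required by the interface, but not needed for this sketch.
        public void OnPointerDown(MixedRealityPointerEventData eventData) { }
        public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
        public void OnPointerUp(MixedRealityPointerEventData eventData) { }
    }

    The Details.Object check skips selects that don't land on anything; for the terrain to register a hit at all, it still needs a collider that the MRTK pointers can raycast against.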