I'm running a server on the HoloLens, and I created a PC app that works as a client. Via the client I can drag, rescale, rotate, etc. the holograms created in Unity.
The holograms have functions that start when the user performs an air tap gesture on them. I want to be able to start those functions from my PC app. The easiest way would be to simply send a message from the app to the HoloLens telling it to do a certain thing, but...
I want to run those functions specifically as if the air tap gesture had happened. To do so I need to simulate an air tap gesture: I want to send coordinates from the PC app to the HoloLens and have my scene behave as if an actual air tap had been executed at those coordinates, without the gesture being performed in real life. A similar thing can be done on the PC version of Windows 10, as described for example here: https://superuser.com/questions/159618/simulating-mouse-clicks-at-specific-screen-coordinates.
So here is my question: is it possible to simulate gestures? I would really appreciate any info you can share with me. Thanks.
If you are using the standard MRTK Gaze stuff to handle your gestures, you could define a new IInputSource.
The example for adding Gamepad input could be a good starting point - instead of triggering an air tap when a gamepad button is pressed, trigger it in response to remote calls from your PC app.
The benefit of this is that it's in keeping with the existing input system - your code which acts on input doesn't need to know that it came from a gamepad, a hand, or your desktop app.
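Here is a rough sketch of what that could look like, written against the HoloToolkit-era input system (HoloToolkit.Unity.InputModule). The exact method names and signatures (RaiseSourceDetected, RaiseSourceDown/Up, RaiseInputClicked, the InteractionSourcePressInfo enum, and which IInputSource members BaseInputSource leaves abstract) vary between toolkit versions, so treat this as a starting point and mirror whatever the Gamepad input example in your copy of the toolkit actually does:

```csharp
// Sketch only - API names assumed from the HoloToolkit-era input module;
// check them against the GamepadInput example in your toolkit version.
using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class RemoteTapInputSource : BaseInputSource
{
    // Arbitrary id for this virtual source; real sources get theirs from the platform.
    private const uint SourceId = 424242;

    private void Start()
    {
        // Announce the virtual source to the input system, like the gamepad source does.
        InputManager.Instance.RaiseSourceDetected(this, SourceId);
    }

    // Call this from your networking code whenever the PC app requests a tap.
    public void SimulateAirTap()
    {
        InputManager.Instance.RaiseSourceDown(this, SourceId, InteractionSourcePressInfo.Select);
        InputManager.Instance.RaiseInputClicked(this, SourceId, InteractionSourcePressInfo.Select, 1);
        InputManager.Instance.RaiseSourceUp(this, SourceId, InteractionSourcePressInfo.Select);
    }

    // A virtual source has no position, rotation or pointing ray of its own to report.
    public override SupportedInputInfo GetSupportedInputInfo(uint sourceId)
    {
        return SupportedInputInfo.None;
    }
}
```

Your networking layer would then just call SimulateAirTap() on this component when the message arrives from the PC. One caveat: with the stock gaze-driven InputManager, a simulated click is delivered to whatever currently has gaze focus, so to tap at coordinates sent from the PC you'd also need to steer the focus to that target (or resolve the target yourself before raising the events).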