In Unity, say you need to detect a finger touch (finger drawing) on something in the scene.
The only way to do this in modern Unity is very simple:
Step 1. Put a collider on that object. ("The ground" or whatever it may be.) 1
Step 2. On your camera, in the Inspector panel, click to add a Physics Raycaster (2D or 3D as relevant).
Step 3. Simply use code as in Example A below.
(Tip - don't forget to ensure there's an EventSystem in the scene ... sometimes Unity adds one automatically, sometimes not! See the sketch just below.)
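If you want to sanity-check that setup from code, something roughly like this works (a sketch only - the script name and warning text are just illustrative):

using UnityEngine;
using UnityEngine.EventSystems;

// Rough sanity check for Steps 1-3: attach this to the camera from Step 2.
public class RaycastSetupCheck : MonoBehaviour
{
    void Awake()
    {
        // Step 2: the camera needs a Physics Raycaster (2D or 3D)
        // for pointer events to reach scene colliders.
        if (GetComponent<PhysicsRaycaster>() == null && GetComponent<Physics2DRaycaster>() == null)
            Debug.LogWarning("No Physics Raycaster on this camera.");

        // The tip above: make sure an EventSystem exists somewhere in the scene.
        if (FindObjectOfType<EventSystem>() == null)
            new GameObject("EventSystem", typeof(EventSystem), typeof(StandaloneInputModule));
    }
}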
Fantastic, couldn't be easier. Unity finally handles propagation (and blocking) correctly through the UI layer. It works uniformly and flawlessly on desktop, on devices, in the Editor, and so on. Hooray Unity.
All good. But what if you want to draw just "on the screen"?
So you want, quite simply, swipes/touches/drawing from the user "on the screen" (for example, simply to operate an orbit camera), just as in any ordinary 3D game where the camera runs around and moves.
You don't want the position of the finger on some object in world space; you simply want abstract "finger motions" (i.e. the position on the glass).
What collider do you then use? Can you do it with no collider? It seems fatuous to add a collider just for that reason.
What we do is this:
I just make a flat collider of some sort and actually attach it under the camera, so that it simply sits in the camera frustum and completely covers the screen.
(For the code, there is then no need to use ScreenToWorldPoint, so just use code as in Example B - extremely simple, works perfectly.)
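For completeness, here is a rough sketch of how that under-camera collider can be set up at runtime (it assumes a perspective camera; the depth value and names are arbitrary):

using UnityEngine;

// Sketch only: builds a flat collider parented under the camera so it fills the view.
// "depth" is an arbitrary distance inside the frustum.
public class GlassColliderBuilder : MonoBehaviour
{
    public Camera cam;
    public float depth = 5f;

    void Start()
    {
        // Frustum size at the chosen depth (fieldOfView is the vertical FOV in degrees).
        float height = 2f * depth * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
        float width  = height * cam.aspect;

        var glass = new GameObject("Glass", typeof(BoxCollider));
        glass.transform.SetParent(cam.transform, false);
        glass.transform.localPosition = new Vector3(0f, 0f, depth);
        glass.GetComponent<BoxCollider>().size = new Vector3(width, height, 0.01f);

        // Put the FingerMove script from Example B on this "Glass" object (or on a parent);
        // pointer events bubble up the hierarchy until a handler is found.
    }
}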
My question: it seems a bit odd to have to use the "under-camera collider" I describe, just to get touches on the glass.
What's the deal here?
(Note - please don't answer involving Unity's ancient "Touches" system, which is unusable today for real projects; you can't ignore .UI using the legacy approach.)
Code sample A - drawing on a scene object. Use ScreenToWorldPoint.
using UnityEngine;
using System.Collections;
using UnityEngine.EventSystems;

public class FingerMove : MonoBehaviour, IPointerDownHandler, IDragHandler, IPointerUpHandler
{
    public Camera theCam;   // assign the camera carrying the Physics Raycaster in the Inspector

    private Vector3 prevPointWorldSpace;
    private Vector3 thisPointWorldSpace;
    private Vector3 realWorldTravel;

    public void OnPointerDown (PointerEventData data)
    {
        Debug.Log("FINGER DOWN");
        // Note: data.position is a screen point with z = 0; for a perspective camera
        // you would normally supply a z distance before calling ScreenToWorldPoint.
        prevPointWorldSpace = theCam.ScreenToWorldPoint(data.position);
    }

    public void OnDrag (PointerEventData data)
    {
        thisPointWorldSpace = theCam.ScreenToWorldPoint(data.position);
        realWorldTravel = thisPointWorldSpace - prevPointWorldSpace;
        _processRealWorldtravel();
        prevPointWorldSpace = thisPointWorldSpace;
    }

    public void OnPointerUp (PointerEventData data)
    {
        Debug.Log("clear finger...");
    }

    private void _processRealWorldtravel()
    {
        // your drawing / dragging code here
    }
}
Code sample B ... for when you only care about what the user does on the glass screen of the device. Even easier here:
using UnityEngine;
using System.Collections;
using UnityEngine.EventSystems;

public class FingerMove : MonoBehaviour, IPointerDownHandler, IDragHandler, IPointerUpHandler
{
    private Vector2 prevPoint;
    private Vector2 newPoint;
    private Vector2 screenTravel;

    public void OnPointerDown (PointerEventData data)
    {
        Debug.Log("FINGER DOWN");
        prevPoint = data.position;
    }

    public void OnDrag (PointerEventData data)
    {
        newPoint = data.position;
        screenTravel = newPoint - prevPoint;
        prevPoint = newPoint;
        _processSwipe();
    }

    public void OnPointerUp (PointerEventData data)
    {
        Debug.Log("FINGER UP...");
    }

    private void _processSwipe()
    {
        // your code here
        Debug.Log("screenTravel left-right.. " + screenTravel.x.ToString("f2"));
    }
}
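As a concrete illustration of _processSwipe for the orbit-camera case mentioned above (a sketch only - the pivot and degreesPerPixel names are assumptions, not part of the sample):

// Inside the FingerMove class from Code sample B (illustrative only):
public Transform pivot;               // the point the camera is parented under and orbits around
public float degreesPerPixel = 0.2f;  // tuning value, pick whatever feels right

private void _processSwipe()
{
    // Yaw the pivot by the horizontal screen travel; the camera orbits with it.
    pivot.Rotate(0f, screenTravel.x * degreesPerPixel, 0f, Space.World);
}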
1 If you're new to Unity: at that step you will very likely want to make a layer called, say, "Draw"; in the physics settings, make "Draw" interact with nothing; and in Step 2, set the Raycaster's event mask to the "Draw" layer.
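If you prefer to do the Raycaster part of that from code, it is roughly this (a sketch; the "Draw" layer itself must already exist in Tags & Layers):

using UnityEngine;
using UnityEngine.EventSystems;

// Sketch: point the camera's PhysicsRaycaster at the "Draw" layer only.
public class DrawLayerSetup : MonoBehaviour
{
    void Awake()
    {
        var raycaster = GetComponent<PhysicsRaycaster>();
        raycaster.eventMask = LayerMask.GetMask("Draw");
    }
}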
The question and the answer I am going to post seem pretty much opinion-based. Nevertheless, I am going to answer as best as I can.
If you are trying to detect pointer events on the screen, there is nothing wrong with representing the screen with an object. In your case, you use a 3D collider to cover the entire frustum of the camera. However, there is a native way to do this in Unity, using a 2D UI object that covers the entire screen. The screen is best represented by a 2D object; to me, this seems like the natural way to do it.
I use a generic script for this purpose:
using UnityEngine;
using UnityEngine.EventSystems;

// MonoSingleton<T> is my own singleton base class, not a built-in Unity type.
public class Screen : MonoSingleton<Screen>, IPointerClickHandler, IDragHandler, IBeginDragHandler, IEndDragHandler, IPointerDownHandler, IPointerUpHandler, IScrollHandler
{
    private bool holding = false;   // toggled in the elided OnPointerDown/OnPointerUp implementations
    private PointerEventData lastPointerEventData;

    #region Events
    public delegate void PointerEventHandler(PointerEventData data);

    static public event PointerEventHandler OnPointerClick = delegate { };
    static public event PointerEventHandler OnPointerDown = delegate { };
    /// <summary> Don't use delta data as it will be wrong. If you are going to use delta, use OnDrag instead. </summary>
    static public event PointerEventHandler OnPointerHold = delegate { };
    static public event PointerEventHandler OnPointerUp = delegate { };
    static public event PointerEventHandler OnBeginDrag = delegate { };
    static public event PointerEventHandler OnDrag = delegate { };
    static public event PointerEventHandler OnEndDrag = delegate { };
    static public event PointerEventHandler OnScroll = delegate { };
    #endregion

    #region Interface Implementations
    void IPointerClickHandler.OnPointerClick(PointerEventData e)
    {
        lastPointerEventData = e;
        OnPointerClick(e);
    }

    // And other interface implementations, you get the point
    #endregion

    void Update()
    {
        if (holding)
        {
            OnPointerHold(lastPointerEventData);
        }
    }
}
The Screen class is a singleton, because there is only one screen in the context of the game. Objects (like the camera) subscribe to its pointer events and arrange themselves accordingly. This also keeps single responsibility intact.
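For example, a subscriber could look roughly like this (a sketch; the class and field names are mine, not part of the script above):

using UnityEngine;
using UnityEngine.EventSystems;

// Sketch of a subscriber. Note that the Screen class above (in the global namespace)
// shadows UnityEngine.Screen inside this file.
public class OrbitCameraController : MonoBehaviour
{
    public Transform pivot;               // illustrative: the camera hangs under this pivot
    public float degreesPerPixel = 0.2f;

    void OnEnable()  { Screen.OnDrag += HandleDrag; }
    void OnDisable() { Screen.OnDrag -= HandleDrag; }

    void HandleDrag(PointerEventData data)
    {
        // data.delta is the pointer movement since the last event.
        pivot.Rotate(0f, data.delta.x * degreesPerPixel, 0f, Space.World);
    }
}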
You would use this by attaching it to an object that represents the so-called glass (the surface of the screen). If you think of the UI buttons as popping out of the screen, the glass would sit under them. For this, the glass has to be the first child of the Canvas. Of course, the Canvas has to be rendered in screen space for it to make sense.
One hack here, which doesn't quite make sense, is to add an invisible Image component to the glass so that it receives events. This acts as the raycast target of the glass.
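In code, that glass setup would look roughly like this (a sketch; the "Glass" name is mine, and it assumes a screen-space Canvas):

using UnityEngine;
using UnityEngine.UI;

// Sketch of the "glass": a full-screen, invisible Image that is the first child
// of a screen-space Canvas, with the Screen script from above attached to it.
public static class GlassBuilder
{
    public static GameObject CreateGlass(Canvas canvas)
    {
        var glass = new GameObject("Glass", typeof(RectTransform), typeof(Image), typeof(Screen));

        var rect = glass.GetComponent<RectTransform>();
        rect.SetParent(canvas.transform, false);
        rect.SetAsFirstSibling();            // first child, so it sits behind the rest of the UI
        rect.anchorMin = Vector2.zero;       // stretch to cover the whole screen
        rect.anchorMax = Vector2.one;
        rect.offsetMin = Vector2.zero;
        rect.offsetMax = Vector2.zero;

        var image = glass.GetComponent<Image>();
        image.color = Color.clear;           // invisible...
        image.raycastTarget = true;          // ...but still a raycast target

        return glass;
    }
}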
You could also use Input (Input.touches etc.) to implement this glass object. That would work by checking whether the input changed in every Update call. That is a polling-based approach, whereas the one above is event-based.
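For contrast, the polling flavour would look roughly like this (a sketch only; note it does nothing about the UI-blocking issue raised in the question):

using UnityEngine;

// Polling sketch: inspect the input state every frame instead of receiving events.
// On a device you would read Input.touches instead of the simulated mouse here.
public class PollingGlass : MonoBehaviour
{
    private Vector2 prevPoint;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
            prevPoint = Input.mousePosition;

        if (Input.GetMouseButton(0))
        {
            Vector2 current = Input.mousePosition;
            Vector2 screenTravel = current - prevPoint;
            prevPoint = current;
            // ...use screenTravel here
        }
    }
}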
Your question reads as if it is looking for a way to justify using the Input class. IMHO, do not make it harder for yourself: use what works, and accept the fact that Unity is not perfect.