A quick look at Touch Handling APIs in Game Engines
Friday, November 20, 2020
Browser - Gold standard of Touch APIs
- https://developer.mozilla.org/en-US/docs/Web/API/Touch_events
- https://developer.mozilla.org/en-US/docs/Web/API/TouchEvent
- https://developer.mozilla.org/en-US/docs/Web/API/Touch
The browser is not a "game engine", but plenty of games run in browsers, and it has solid touch input handling that is exercised across devices every day in different ways. Most of the time, though, touches are handled through DOM elements rather than directly. Let's look at how direct handling can be done.
The browser handles four touch events:
- touch start: when a finger starts touching the browser content area
- touch move: when the finger moves after touching the browser content area
- touch end: when the finger is released from the browser content area, whether it moved or not
- touch cancel: when the browser tab changes, the finger moves onto the browser UI, or something else interrupts the touch
In code, subscribing to these events requires passing the function that will handle them, and the boolean at the end (useCapture) controls whether the handler runs during the capture phase of event propagation. In the browser you also choose which element listens for the event - these details are not specific to touch, but they enable interesting functionality.
function startup() {
  // the element that should receive touch input
  const el = document.getElementById("canvas");
  // the trailing "false" registers the listeners for the bubbling phase
  el.addEventListener("touchstart", handleStart, false);
  el.addEventListener("touchend", handleEnd, false);
  el.addEventListener("touchcancel", handleCancel, false);
  el.addEventListener("touchmove", handleMove, false);
}
Each event carries lists of Touch objects, each with the following properties: identifier, clientX, clientY, pageX, pageY, screenX, screenY, and target. More properties are available experimentally depending on the browser.
The identifier property is a unique integer for each touch and remains
consistent for each event during the duration of each finger's contact
with the surface.
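As a minimal sketch of what those handlers could look like (the Map-based bookkeeping below is purely illustrative and not part of the MDN example), the identifier is what lets you follow each finger across events:

const ongoingTouches = new Map(); // identifier -> last known position

function handleStart(evt) {
  evt.preventDefault(); // keep the browser from also synthesizing mouse events
  for (const touch of evt.changedTouches) {
    ongoingTouches.set(touch.identifier, { x: touch.pageX, y: touch.pageY });
  }
}

function handleMove(evt) {
  for (const touch of evt.changedTouches) {
    // the identifier stays stable while the finger stays on the surface
    ongoingTouches.set(touch.identifier, { x: touch.pageX, y: touch.pageY });
  }
}

function handleEnd(evt) {
  for (const touch of evt.changedTouches) {
    ongoingTouches.delete(touch.identifier);
  }
}

function handleCancel(evt) {
  handleEnd(evt); // treat a cancelled touch like an ended one here
}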
Desktop browsers may disable these events even when a touch screen is available. A separate Pointer Events API exists and is the recommended alternative; it unifies mouse, touch, and pen input in the same event type.
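A rough sketch of that alternative, assuming the same canvas element as above; pointerId plays the role of identifier and pointerType tells the input kinds apart:

const canvas = document.getElementById("canvas");

canvas.addEventListener("pointerdown", (evt) => {
  // pointerType is "mouse", "touch" or "pen"
  console.log(`${evt.pointerType} pointer ${evt.pointerId} down at ${evt.clientX}, ${evt.clientY}`);
});

canvas.addEventListener("pointermove", (evt) => {
  console.log(`pointer ${evt.pointerId} moved to ${evt.clientX}, ${evt.clientY}`);
});

canvas.addEventListener("pointerup", (evt) => {
  console.log(`pointer ${evt.pointerId} released`);
});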
Unity - The most complete API
In Unity, you can check Input.touchCount to find out how many touch points are available that frame, and retrieve each one using Input.GetTouch(i).
Each touch will have the following properties: fingerId, phase, position, deltaTime, radius, pressure, rawPosition, deltaPosition, azimuthAngle, altitudeAngle, maximumPossiblePressure, radiusVariance and type (whether a touch was of Direct, Indirect (or remote), or Stylus type).
The following are the possible touch phases: Began (a finger touched the screen), Moved (a finger moved on the screen), Stationary (a finger is touching the screen but hasn't moved), Ended (a finger was lifted from the screen; this is the final phase of a touch) or Canceled (the system cancelled tracking for the touch).
Because both the index i used in Input.GetTouch(i) and fingerId are integers, it is not obvious which one to pass to touch APIs that take an integer. For example, EventSystem.current.IsPointerOverGameObject(int) actually requires passing touch.fingerId.
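As a rough sketch of how this polling could look inside a MonoBehaviour (the TouchLogger class name and the logging are just illustrative):

using UnityEngine;
using UnityEngine.EventSystems;

public class TouchLogger : MonoBehaviour
{
    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);

            // note: pass fingerId, not the loop index, to the EventSystem
            bool overUI = EventSystem.current != null &&
                          EventSystem.current.IsPointerOverGameObject(touch.fingerId);

            if (touch.phase == TouchPhase.Began && !overUI)
            {
                Debug.Log($"Finger {touch.fingerId} began at {touch.position}");
            }
            else if (touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled)
            {
                Debug.Log($"Finger {touch.fingerId} finished");
            }
        }
    }
}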
GameMaker - Multiple mice approach!
- https://docs2.yoyogames.com/index.html?page=source%2F_build%2F3_scripting%2F4_gml_reference%2Fcontrols%2Fdevice%20input%2Findex.html
- https://docs2.yoyogames.com/source/_build/2_interface/1_editors/events/gesture_events.html
GameMaker has two separate APIs for dealing with touch. One is the same API used for mouse input, but it works the way a PC would if it had multiple mice attached, which makes it different from what we have seen so far. You pass a device number that goes from 0 to n, where the number of available devices appears to be 5 - although I can't find the function that retrieves this number. Device 0 is then the first finger that touched the screen (or the mouse on a computer), 1 is the second one, and so on. You can use a function like device_mouse_check_button(device, button) to check whether a finger is pressed (by checking the left button) or the mouse is clicking, with the device being this number that ranges from 0 to n.
The following functions are available: device_mouse_check_button, device_mouse_check_button_pressed, device_mouse_check_button_released, device_mouse_x, device_mouse_y, device_mouse_raw_x, device_mouse_raw_y, device_mouse_x_to_gui, device_mouse_y_to_gui, and device_mouse_dbclick_enable.
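A small GML sketch of the multiple-mice idea, placed in a Step event; the hard-coded limit of 5 devices follows the observation above and is an assumption:

// Step event: treat each potential touch as its own "mouse" device
for (var i = 0; i < 5; i++)
{
    if (device_mouse_check_button(i, mb_left))
    {
        // finger i (or the real mouse, on desktop) is currently held down
        var _x = device_mouse_x(i);
        var _y = device_mouse_y(i);
        show_debug_message("device " + string(i) + " at " + string(_x) + "," + string(_y));
    }
}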
An additional, second API is available that gives access to all the common touch gestures and makes them easy to use. These include tap, drag, flick, pinch, and rotate finger events.
Godot - A minimal approach
- https://docs.godotengine.org/en/stable/classes/class_inputeventscreentouch.html
- https://docs.godotengine.org/en/stable/classes/class_touchscreenbutton.html
Godot has a very lean InputEvent for screen touch, with three properties: index (a number from 0 to n identifying a finger), position (the x and y point on the screen), and a boolean named pressed that is false when the finger is released.
At the same time, since it's an InputEvent, it can propagate through the SceneTree, similar to how the browser allows handling events on elements in the DOM. This makes this simple approach still very powerful.
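A minimal GDScript sketch of handling that event in a node's _input callback (the print calls are just illustrative; InputEventScreenDrag, which reports finger movement, also exists but isn't covered above):

extends Node

func _input(event):
    if event is InputEventScreenTouch:
        if event.pressed:
            print("finger ", event.index, " down at ", event.position)
        else:
            print("finger ", event.index, " released at ", event.position)
    elif event is InputEventScreenDrag:
        # drag events carry the movement of an already pressed finger
        print("finger ", event.index, " moved to ", event.position)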
Additionally, Godot provides TouchScreenButton for designing buttons meant to receive touches from multiple fingers.
Unreal - Traditional with a timestamp
- https://docs.unrealengine.com/en-US/API/Runtime/Engine/GameFramework/APlayerController/InputTouch/index.html
- https://docs.unrealengine.com/en-US/BlueprintAPI/Input/TouchInput/index.html
Unreal's touch events are similar to the ones we have seen so far: each touch has a unique Handle per finger, it happens at a TouchLocation (x, y position), it has a Type (similar to the previously seen phases), a float indicating Force, and a DeviceTimestamp so you can know precisely at which time the specific touch occurred.
A touch Type can be one of the following enum values: Began, Moved, Stationary, ForceChanged, FirstMove, Ended, or NumTypes (used when iterating across the other types).
Overall, it's a very standard API. Events are also available for On Input Touch Begin, On Input Touch End and others.
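A hedged C++ sketch of wiring touch up on a pawn through BindTouch; the AMyTouchPawn class and handler names are hypothetical, and this is only an excerpt rather than a complete class:

#include "GameFramework/Pawn.h"
#include "Components/InputComponent.h"

void AMyTouchPawn::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);

    PlayerInputComponent->BindTouch(IE_Pressed, this, &AMyTouchPawn::OnTouchBegan);
    PlayerInputComponent->BindTouch(IE_Released, this, &AMyTouchPawn::OnTouchEnded);
}

void AMyTouchPawn::OnTouchBegan(ETouchIndex::Type FingerIndex, FVector Location)
{
    // FingerIndex identifies the finger, Location.X/Y is the screen position
    UE_LOG(LogTemp, Log, TEXT("Touch %d began at %f, %f"), (int32)FingerIndex, Location.X, Location.Y);
}

void AMyTouchPawn::OnTouchEnded(ETouchIndex::Type FingerIndex, FVector Location)
{
    UE_LOG(LogTemp, Log, TEXT("Touch %d ended"), (int32)FingerIndex);
}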
Ogre3D - Very similar to SDL2
Ogre3D provides a TouchFingerEvent where each finger has a unique integer fingerId, a type, an x and y position, and additionally a dx and dy for the delta of that position.
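As a very rough sketch, assuming the OgreBites input layer; the InputListener callback names used here are my assumption and worth double-checking against your Ogre version:

#include <cstdio>
#include "OgreInput.h"

class TouchLogger : public OgreBites::InputListener
{
public:
    bool touchPressed(const OgreBites::TouchFingerEvent& evt) override
    {
        // evt.fingerId, evt.x and evt.y as described above
        printf("finger %d down at %f, %f\n", evt.fingerId, evt.x, evt.y);
        return true;
    }

    bool touchMoved(const OgreBites::TouchFingerEvent& evt) override
    {
        // dx and dy carry the delta since the last event
        printf("finger %d moved by %f, %f\n", evt.fingerId, evt.dx, evt.dy);
        return true;
    }
};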
Love 2D - Polling and events and Lua
- https://love2d.org/wiki/love.touch
- https://love2d.org/wiki/love.touch.getPosition
- https://love2d.org/wiki/love.touchpressed
Love2D allows you to either poll for touches during an engine update or use callback functions to work with touch events.
When polling, love.touch.getTouches() retrieves a table containing an id for each active touch. The position of a touch can then be retrieved using local x, y = love.touch.getPosition(id) for each id. Because Love2D uses Lua, polling is very fast.
The events available are love.touchpressed, love.touchmoved and love.touchreleased. In each of them, the passed parameters are id, x, y, dx, dy and pressure.
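A small Lua sketch showing both styles side by side (the print calls are just illustrative):

-- polling each frame
function love.update(dt)
    for _, id in ipairs(love.touch.getTouches()) do
        local x, y = love.touch.getPosition(id)
        -- do something with x, y for this finger
    end
end

-- event-driven
function love.touchpressed(id, x, y, dx, dy, pressure)
    print("touch began", x, y, pressure)
end

function love.touchmoved(id, x, y, dx, dy, pressure)
    print("touch moved by", dx, dy)
end

function love.touchreleased(id, x, y, dx, dy, pressure)
    print("touch ended", x, y)
end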
Ren'Py - Gestures
Using gestures instead of handling the position where a touch happens is a very different approach; the Ren'Py docs explain it neatly:
The gesture recognizer first classifies swipes into 8 compass directions, "n", "ne", "e", "se", "s", "sw", "w", "nw". North is considered to be towards the top of the screen. It then concatenates the swipes into a string using the "_" as a delimiter. For example, if the player swipes down and to the right, the string "s_e" will be produced.
This allows gestures to be built as strings. config.gestures is available as a dictionary of sorts, where a string representing a gesture maps to a function (an action): define config.gestures = { "n_s_w_e_w_e" : "progress_screen" }.
Conclusion
When researching this topic I also looked into HaxeFlixel Actions, and while there's no obvious way of handling touch there other than through the mouse, it has a very interesting system for matching input to a function that executes its gameplay result, which seems similar to the premade gestures available in both Ren'Py and GameMaker. Still, gesture handling built into the engine seems to be a minority.
Looking at the touch APIs of the engines above, most of them seem determined to give you the meaningful data that is as close as possible to the user input and let you handle and interpret it as you wish. In the early days of multi-touch APIs I believe I even saw one x and y position per "pixel" touched, which was very demanding to go through. A position per finger per frame is data that can be handled without as much effort, and from the documentation that appears to be what current APIs make available. In some engines, though, the position is a normalized float between 0 and 1 and you are on your own to convert it into your world coordinates.
Godot seems to work with the leanest set: just an ID per finger, a position, and enough information to tell when the finger is down and when it leaves the screen. If you are looking for the minimum you need to support mobile devices when building your game, I believe it hits it.