Hello developers and interested applicants 🙂
The first iteration of our SDK introduces a whole set of jargon, concepts and novel approaches to gestural interaction. If you are not versed in the technical knowledge behind our gesture technology, fret not. Think of our SDK as a well-equipped toolbox, with an instruction manual that is being constantly refined. This article summarizes our SDK and highlights some of the key things to note.
Simple application demo, designed and created using the Unity-Plugin for our SDK
Knowledge and Concepts
We have defined specific classes and categories within our gesture recognition framework in order to provide a robust and meaningful structure for understanding our methodology and applying it in game and application development. We specifically define:
- The categories of gestures supported by the SDK: ManoClasses (low-level programming)
- The types of gestures supported by the SDK: ManoGestures (high-level programming)
ManoClasses are defined at the low-level programming layer and provide the most abstract and elementary information, which allows skilled developers to build their own gesture recognition frameworks from the ground up. They are classified into 4 major categories of gestures: Front Grab, Back Grab, Pinch and Point. Within each class are “States”: the specific hand poses that occur (at intervals) within each category of gesture / ManoClass. Skilled low-level developers may find this information useful for understanding, and even creating, their own hand-tracking and gesture recognition frameworks.
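As a sketch of how this class/state hierarchy might be modelled on the application side, here is a minimal example. The enum values and the state range are illustrative assumptions on our part, not the SDK’s actual API:

```python
from enum import Enum

class ManoClass(Enum):
    """The four gesture categories described above."""
    FRONT_GRAB = "front_grab"
    BACK_GRAB = "back_grab"
    PINCH = "pinch"
    POINT = "point"

def state_to_openness(state: int, max_state: int = 13) -> float:
    """Map a discrete pose state within a ManoClass to a 0..1 'openness' value
    (1.0 = fully open, 0.0 = fully closed). The state count 0..13 is a
    hypothetical example, not the SDK's real range."""
    if not 0 <= state <= max_state:
        raise ValueError("state out of range")
    return 1.0 - state / max_state
```

A pose midway through a grab, for instance, would map to an openness around 0.5 under this toy scheme.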
“Tap” Trigger ManoGesture
ManoGestures are defined at the high-level programming layer and are subsets of ManoClasses. They represent the gestures we are currently able to track with high accuracy, classified as either Trigger (quick-movement) or Continuous (static or translating) gestures. Developers can use these gesture recognition frameworks readily and implement ManoGestures directly into their games and apps!
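The trigger/continuous distinction can be sketched as an event-handling rule: a trigger gesture (like “Tap”) should fire once per occurrence, while a continuous gesture reports on every frame it is held. The function and names below are our own illustration, not part of the SDK:

```python
from enum import Enum

class GestureType(Enum):
    TRIGGER = "trigger"        # quick, one-shot movements (e.g. a tap)
    CONTINUOUS = "continuous"  # static or translating poses held over time

def handle_gesture(name: str, gesture_type: GestureType, fired: set) -> bool:
    """Return True if the application should react to this gesture event.
    Trigger gestures are consumed once; continuous gestures always report."""
    if gesture_type is GestureType.TRIGGER:
        if name in fired:
            return False       # this trigger was already consumed
        fired.add(name)
        return True
    return True                # continuous gestures report every frame
```

In a game loop, the `fired` set would typically be cleared when the hand returns to a neutral pose, re-arming the trigger.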
A core concept in our framework is achieving proper hand detection. For the layperson, this refers to the meaningful separation of the hand and its features from the background within the camera’s field of view. This process, called segmentation, is triggered by the SDK’s “Calibration” step.
Calibration is probably the most critical step in ensuring good hand detection, and it depends mainly on two factors: lighting conditions and environment complexity.
Developers should note that hand detection is affected by the nature and extent of the lighting cast over the hand relative to the background. A good gauge of favorable lighting is light that is well distributed over the hand; avoid directional light and strong shadows cast on the hand. Additionally, consistent (as opposed to changing) lighting conditions ensure that detection is not significantly lost over time, which would require re-calibration.
Realtime hand-tracking displayed as a contour representation with outline and inner-points
Hand detection is also affected by the background from which the hand is to be segmented. The idea is to achieve the best contrast between the hand and the background. To facilitate this, our SDK comes with different colour presets (Default, Yellow, Red, etc.). Choose the background setting wisely, in accordance with the background colour and complexity. * An automatic mode will be released in a subsequent update, which will optimize the background selection automatically.
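To illustrate why contrast matters for segmentation, here is a deliberately naive luminance threshold over a toy grayscale frame. The SDK’s real segmentation is far more sophisticated; this sketch (which assumes the hand is brighter than the background, purely for illustration) only shows that a clear brightness gap makes the hand/background split unambiguous:

```python
def segment_hand(frame, threshold=100):
    """Label pixels brighter than `threshold` as hand (1) and the rest as
    background (0). Toy example: real segmentation uses colour, motion and
    learned models, not a single global threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in frame]

# High contrast: background pixels ~20, hand pixels ~200 -> a clean mask.
frame = [
    [20, 20, 200],
    [20, 200, 200],
]
mask = segment_hand(frame)
```

With pixel values clustered near the threshold (low contrast), the same rule would flip labels unpredictably, which is exactly the situation the colour presets help you avoid.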
The main advantage of ManoMotion’s technology is that it only requires input from a simple 2D camera, present in most (if not all) smartphones. However, this also means a user has to be mindful of the orientation of the camera-enabled device, the camera’s position on the device and its field of view. A simple principle to follow: the SDK requires the hand to be clearly visible and the dominant object within the camera’s field of view.
Good Practice for Developers
- The hand should be parallel to the camera.
- You should be able to see your full hand.
- The hand’s angle relative to the camera is important, so orient your hand such that it is well exposed within the camera’s field of view. Ensure the camera falls on the left of the device (i.e. in landscape view).
- Keeping the right distance is key. A common problem occurs when the hands are either too close to or too far from the camera, making them either too prominent in the camera’s field of view (such that part of the hand is cut off) or too small to be detected by the SDK.
*Developers should keep the above pointers in mind when designing gameplay and implementing features in their applications.
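A rough way an application could pre-check the distance pointer above is to compare the detected hand’s area against the frame area. The threshold values below are illustrative guesses for the sketch, not numbers from the SDK:

```python
def hand_distance_check(hand_area: float, frame_area: float,
                        min_fraction: float = 0.05,
                        max_fraction: float = 0.6) -> str:
    """Classify hand prominence in the frame as 'too far', 'too close' or 'ok'.
    The fraction thresholds are hypothetical tuning values."""
    if frame_area <= 0:
        raise ValueError("frame_area must be positive")
    fraction = hand_area / frame_area
    if fraction < min_fraction:
        return "too far"     # hand too small to be detected reliably
    if fraction > max_fraction:
        return "too close"   # hand dominates the frame; parts may be cut off
    return "ok"
```

An app could surface these labels as on-screen hints ("move your hand closer") before gameplay starts.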
Additionally, here’s a list of information our SDK currently offers:
- ManoClasses & detected ManoGestures
- Relative Depth information of Hand
- Orientation of Hand
- Information + Visual Representation of Hand Contour & Inner Points
- Fingertip & Center-of-Palm Coordinates and Representation
- Warning Flags for errors
- Raw Data Export
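Putting that list together, the per-frame information could be bundled into a structure like the following. The field names and types are assumptions made for illustration, not the SDK’s actual data model:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class HandInfo:
    """Illustrative container for the per-frame data listed above."""
    mano_class: str                       # detected ManoClass
    gesture: str                          # detected ManoGesture, if any
    relative_depth: float                 # 0.0 (near) .. 1.0 (far)
    orientation: float                    # hand angle in degrees
    contour: List[Tuple[int, int]]        # outline points
    inner_points: List[Tuple[int, int]]   # points inside the contour
    fingertips: List[Tuple[int, int]]     # fingertip coordinates
    palm_center: Tuple[int, int]          # center-of-palm coordinate
    warnings: List[str] = field(default_factory=list)  # error flags
```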
And voilà! We hope this summary provides an overview of our SDK and what it offers. We constantly update our resources and knowledge base through our documentation, tutorials and forum to reinforce your understanding. If you are keen to try out our SDK and haven’t signed up already, you can do so here!