Hand tracking and gesture recognition open up new possibilities for developers, brands, and users. Being able to track the hands’ position and use hand gestures as input allows for a richer user experience and provides new ways of interacting with devices.
Hand tracking and hand detection
Hand tracking makes it possible to track the position and movement of the hands. This can be very useful in safety applications such as the one that saw manufacturer Altendorf has developed. Their hand-guard system instantly shuts down the saw when it detects hands too close to the saw blade. The apparent advantage, of course, is that it saves the operators’ fingers. But hand injuries don’t only cause suffering; their overall cost implications are also colossal.
Though “simply” detecting the hand’s position has enormous advantages, as it does for a saw operator protected by Altendorf’s safety system, we can take it one step further by adding gesture recognition and gesture control.
Gesture recognition to control apps and devices
With gesture recognition, you can detect the position of the hand and the movement and gestures of the fingers. Used smartly, gesture recognition and control introduce new ways of interacting with devices and applications.
For example, some cameras already use hand tracking and gesture recognition to control basic functions: a pinching gesture to zoom in and out, or an open or closed fist to start and stop filming. An excellent example of this is the OBSBot Tiny.
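As a rough illustration (not how the OBSBot Tiny actually implements it), a pinch gesture can be detected by comparing the distance between the thumb tip and index fingertip against a threshold. The landmark names and normalized coordinates below are hypothetical stand-ins for what a hand tracker typically provides:

```python
import math

def is_pinching(landmarks, threshold=0.05):
    """Return True when the thumb tip and index fingertip are close together.

    `landmarks` is a hypothetical dict of normalized (x, y) fingertip
    positions; real trackers expose similar per-joint keypoints.
    """
    tx, ty = landmarks["thumb_tip"]
    ix, iy = landmarks["index_tip"]
    distance = math.hypot(tx - ix, ty - iy)
    return distance < threshold

# Fingertips almost touching -> pinch detected.
frame = {"thumb_tip": (0.42, 0.55), "index_tip": (0.44, 0.56)}
print(is_pinching(frame))  # True
```

A real system would smooth this signal over several frames so that momentary tracking noise doesn’t trigger or cancel the zoom.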
Gestures are one of the most intuitive ways to interact with computers. They feel natural to people who no longer use keyboards or mice, but they also make sense for those who don’t want to take their hands off the keyboard to touch a trackpad while typing. Imagine, for example, the hygiene benefits for a doctor in an operating room who could control a computer without having to touch a keyboard or mouse.
Gesture recognition and augmented reality
Another use case of gesture recognition is in augmented reality, where you can incorporate digital assets for a more immersive user experience. A virtual ring try-on is just one example of where you could use hand tracking and gesture recognition.
Brands and retailers can allow their customers to do a virtual ring try-on online, in the comfort of their own homes.
How gesture recognition works
Touchless gesture recognition lets you use your hands to interact with a device or app. Instead of typing on a keyboard or tapping a touch screen, you move your hands, and a motion sensor detects and interprets those movements as the primary way to input data.
Simplified, this is how gesture recognition works:
- A camera captures movement and feeds image data to the computer or mobile device.
- The SDK interprets the image data to identify hand movements and gestures.
- The SDK matches the detected gesture against a predetermined gesture library.
- Once a gesture is recognized, a command correlated with that specific gesture may be executed.
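The matching and execution steps above can be sketched as a simple lookup. This is a minimal illustration, assuming the tracker has already reduced a frame to a named hand pose; the gesture names and commands are invented for the example, not our SDK’s actual API:

```python
# Hypothetical gesture library mapping recognized poses to commands.
GESTURE_LIBRARY = {
    "open_palm": "start_recording",
    "closed_fist": "stop_recording",
    "pinch": "zoom",
}

def match_gesture(detected_pose):
    """Look up a detected pose in the predetermined gesture library.

    Returns the correlated command, or None if the pose is unrecognized.
    """
    return GESTURE_LIBRARY.get(detected_pose)

def execute(command):
    """Run the command correlated with a recognized gesture."""
    if command is not None:
        print(f"executing: {command}")

# A recognized gesture triggers its correlated command.
execute(match_gesture("open_palm"))  # prints "executing: start_recording"
```

In practice the hard part is the second step, turning noisy image data into a reliable pose label across many hands, skin tones, and lighting conditions, which is where the quality of the underlying model and gesture database matters.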
Hand tracking and gesture control are easy…
Basic hand detection and gesture control are not overly complex to get working. Many developers can build their applications using open-source libraries and frameworks.
However, it is entirely different to build a hand-tracking application that is lightweight and robust enough to accurately and instantaneously detect and match many different hands and gestures. If you want to develop a commercially viable application, then “basic” won’t do it. This is where our Mobile AR SDK comes in handy (pun intended).
We have invested thousands of person-hours and substantial resources into making the SDK quick and accurate. Our database of hands and gestures is among the most extensive available, covering many use cases. If you are interested in learning more, please read our white paper, “The promise of zero-latency hand tracking”.
If you are a developer who wants to learn more about gesture control and AR, or if you are serious about building your app, we also recommend that you download the SDK and see where it takes you.