3D Gesture Recognition Based on Handheld Smart Terminals

Yunhe Li, Yi Xie, Qinyu Zhang
Copyright: © 2018 | Pages: 16
DOI: 10.4018/IJACI.2018100106

Abstract

With the popularity of smart devices, traditional human-computer interaction techniques can no longer accommodate people's needs. This article proposes an iOS-based three-dimensional (3D) gesture recognition system that gathers users' gestures from their handheld smart terminals and interprets their meaning, so that other smart terminals can be controlled through more natural human-computer interaction. Gestures are recognized by reading the corresponding 3D gesture data from the motion sensors of smart terminals and matching them with an optimized dynamic time warping (DTW) algorithm. In this algorithm, the warping path is delimited by slope constraints derived from the characteristics of mobile devices and dynamic programming. The algorithm further reduces the computational load of template matching, and hence the cost of gesture recognition, by pre-storing the upper and lower boundaries of the delimited area in linked lists or by setting distortion thresholds. The efficiency and precision of the recognition scheme were tested and verified on cellphones. The results suggested that the improved algorithm was less time-consuming than classical algorithms and incurred a lower computational load for template matching. Furthermore, gesture recognition based on dynamic template matching, with its higher recognition efficiency and precision, was shown to provide a better human-computer interaction experience.
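
The slope-delimited warping path and distortion threshold described above can be pictured with a small sketch. The Swift fragment below is a minimal illustration, not the authors' implementation: it uses a Sakoe-Chiba-style band as a stand-in for the paper's slope-delimited search area, and the function name, band width, and 1-D feature sequences are assumptions made for demonstration.

```swift
import Foundation

/// A minimal sketch of band-constrained DTW over 1-D feature sequences
/// (e.g., acceleration magnitudes). Returns nil when the match is
/// abandoned early or the sequences cannot be aligned inside the band.
func constrainedDTW(_ a: [Double], _ b: [Double],
                    bandWidth: Int, threshold: Double) -> Double? {
    let n = a.count, m = b.count
    guard n > 0, m > 0, bandWidth >= 0 else { return nil }
    // cost[i][j] = best warped distance aligning a[0..<i] with b[0..<j].
    var cost = Array(repeating: Array(repeating: Double.infinity, count: m + 1),
                     count: n + 1)
    cost[0][0] = 0
    for i in 1...n {
        // Restrict j to a band around the diagonal: a simple stand-in
        // for the paper's slope-delimited search area.
        let lo = max(1, i - bandWidth)
        let hi = min(m, i + bandWidth)
        guard lo <= hi else { return nil }
        var rowMin = Double.infinity
        for j in lo...hi {
            let d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
            rowMin = min(rowMin, cost[i][j])
        }
        // Early abandoning via a distortion threshold: once every cell in
        // the current row exceeds it, this template cannot match.
        if rowMin > threshold { return nil }
    }
    return cost[n][m].isFinite ? cost[n][m] : nil
}
```

A smaller band width narrows the search area and cuts the cost of template matching, mirroring the speed-precision trade-off the article evaluates on cellphones.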

Introduction

With the development of computer technologies and the maturation of recognition techniques, gesture has become a part of human-computer interaction, and gesture recognition techniques have received increasingly widespread attention. Nowadays, more and more applications are being developed on mobile platforms, making a growing number of convenient services available to users. In daily life, gesture interaction has become a common, natural, and intuitive form of communication that can convey special meanings in many particular scenarios. Compared with the keyboards, mice, and remote controllers of conventional human-computer interaction, the method proposed in this paper, making gestures with handheld smart terminals, can improve human-computer interaction and thus make free, natural interaction possible for users, as in the capture sketch below.
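
The article does not specify a capture API, but on iOS the motion sensors are typically read through CoreMotion. The following sketch is an illustrative assumption, not the authors' code: the class name, the 50 Hz sampling rate, and the magnitude feature are all hypothetical choices made for demonstration.

```swift
import Foundation
import CoreMotion

/// A hedged sketch of gesture capture on a handheld iOS terminal,
/// assuming the paper's "motion sensors" map to the accelerometer.
final class GestureRecorder {
    private let motion = CMMotionManager()
    private(set) var samples: [Double] = []

    /// Starts streaming accelerometer data while the user performs a gesture.
    func start() {
        guard motion.isAccelerometerAvailable else { return }
        motion.accelerometerUpdateInterval = 1.0 / 50.0  // 50 Hz: assumed rate
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let a = data?.acceleration else { return }
            // Reduce each 3-axis reading to its magnitude, a simple 1-D
            // feature compatible with the DTW sketch above.
            self?.samples.append((a.x * a.x + a.y * a.y + a.z * a.z).squareRoot())
        }
    }

    /// Stops capture; `samples` then holds the gesture's feature sequence.
    func stop() { motion.stopAccelerometerUpdates() }
}
```

The recorded sequence would then be compared against stored gesture templates, for example with a constrained DTW match like the one sketched earlier.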

At present, a number of research outcomes have been achieved in sensor-based gesture recognition. Hiyadi, Ababsa, Montagne, Bouyakhf, and Regragui (2015) proposed a recognition technique for 3D dynamic gestures in human-robot interaction (HRI) based on the depth information provided by a Kinect sensor. Ding, Zhang, Chen, Chen, and Wu (2015) used the Hidden Markov Model (HMM) algorithm to model and classify gestures. Sudha, Sriraghav, Sudar, Jacob, and Manisha (2017) designed a 14-patch gesture partition method integrated into a vision-based gesture recognition framework for desktop applications; it tracks hand gestures in three-dimensional space and matches them against simple contour models, thereby supporting complex real-time interactions. Acharjya and Anitha (2017) proposed an algorithm framework that processes acceleration and surface electromyography (SEMG) signals for hand gesture recognition. Ikeda, Araki, Dey, Bose, Shafique, Elbaz, et al. (2014) proposed a novel gesture recognition scheme for Leap Motion data, in which a feature set based on fingertip position and orientation is computed and sent to an SVM classifier to identify the executed gesture. Acharjee, Chakraborty, Karaa, Azar, and Dey (2014) proposed a driver gesture recognition algorithm based on 3D convolutional neural networks over depth and intensity data, achieving a 77.5% correct classification rate on the VIVA challenge dataset. Surekha, Nazare, Raju, and Dey (2017) described a template-based recognition method that uses sequential Monte Carlo inference techniques to align input gestures online; unlike standard template-based approaches built on dynamic programming (such as dynamic time warping), their algorithm adapts to hand gesture changes in real time. Hore, Chatterjee, Santhi, Dey, Ashour, Balas, et al. (2017) proposed a novel and effective descriptor, the Histogram of 3D Facets (H3DF), to encode 3D shape information explicitly using depth maps. Tsai et al. (2015) proposed a system designed to give easy access to daily information without mouse and keyboard operations, reducing the steps needed to receive information. In 2016, Cheng, Yang, and Liu presented a survey of recent work on 3D depth-based hand gesture recognition. For the time being, research on sensor-based gestures has mostly focused on sending gesture data collected by lightweight devices to a PC, where the gestures are then recognized. However, this recognition model is too dependent on the PC for data processing and classification, and as a result the application of gestures is restricted.
