Multimodal gesture recognition based on Choquet integral

K. Hirota*, H. A. Vu, P. Q. Le, C. Fatichah, Z. Liu, Y. Tang, M. L. Tangel, Z. Mu, B. Sun, F. Yan, D. Masano, O. Thet, M. Yamaguchi, F. Dong, Y. Yamazaki

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

A multimodal gesture recognition method based on the Choquet integral is proposed, fusing information from a camera and a 3D accelerometer. By computing optimal fuzzy measures for the camera recognition module and the accelerometer recognition module, the proposal achieves an average recognition rate of 92.7% over 8 gesture types, improving the recognition rate by approximately 20% compared with either module alone. The proposed method aims to realize casual human-to-robot communication by integrating nonverbal gesture messages with verbal messages.
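The fusion step described in the abstract can be illustrated with a minimal sketch of a discrete Choquet integral over two information sources. The fuzzy-measure values and gesture names below are hypothetical placeholders, not the optimal measures computed in the paper:

```python
def choquet2(x_cam, x_acc, mu_cam, mu_acc):
    """Discrete Choquet integral of two confidence scores.

    mu_cam = mu({camera}), mu_acc = mu({accelerometer});
    the measure of the full set {camera, accelerometer} is 1.
    """
    lo, hi = sorted((x_cam, x_acc))
    # Measure of the singleton whose score is larger.
    mu_hi = mu_acc if x_acc >= x_cam else mu_cam
    return lo * 1.0 + (hi - lo) * mu_hi

# Fuse per-gesture confidence scores from both modules, then pick the argmax.
# Scores and measure values here are illustrative only.
cam_scores = {"wave": 0.6, "point": 0.3}
acc_scores = {"wave": 0.8, "point": 0.2}
fused = {g: choquet2(cam_scores[g], acc_scores[g], mu_cam=0.5, mu_acc=0.7)
         for g in cam_scores}
best = max(fused, key=fused.get)  # gesture with the highest fused score
```

In the paper, the singleton measures are optimized so that the fused score outperforms either module alone; the sketch above only shows the aggregation formula itself.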

Original language: English
Title of host publication: FUZZ 2011 - 2011 IEEE International Conference on Fuzzy Systems - Proceedings
Pages: 772-776
Number of pages: 5
DOIs
Publication status: Published - 2011
Externally published: Yes
Event: 2011 IEEE International Conference on Fuzzy Systems, FUZZ 2011 - Taipei, Taiwan, Province of China
Duration: 27 Jun 2011 - 30 Jun 2011

Publication series

Name: IEEE International Conference on Fuzzy Systems
ISSN (Print): 1098-7584

Conference

Conference: 2011 IEEE International Conference on Fuzzy Systems, FUZZ 2011
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 27/06/11 - 30/06/11

Keywords

  • 3D Accelerometer
  • Choquet Integral
  • Gesture Recognition
  • Human-Robot Interaction
  • Sensor Fusion

