Sign Language Recognition on Video Data Based on Graph Convolutional Network

Ayas Faikar Nafis, Nanik Suciati

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Sign language is an essential means of communication for deaf and mute people. Automatic recognition of sign language by computer is therefore needed so that non-disabled people can understand the sign language being used. Many studies on sign language recognition have been carried out, including sign language alphabet recognition using the Convolutional Neural Network (CNN). However, a CNN cannot directly represent skeletal data, which has a graph structure. The Graph Convolutional Network (GCN) is a generalization of the CNN that can extract features from graphs in non-Euclidean space, and it is widely used in action recognition research, for example in the Shift-GCN method. This study uses hand joint positions estimated by MediaPipe Hands, which form a graph. The graph is processed by a modified Shift-GCN that introduces a shift weighting approach based on vertex adjacency. The dataset used in this study consists of hand keypoints extracted from video data of the 26 American Sign Language (ASL) alphabet signs. In the experiments, the proposed method achieved a best accuracy of 99.962%.
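The article page does not include code, but the pipeline in the abstract can be sketched. The snippet below, a minimal Python illustration rather than the authors' implementation, extracts the 21 MediaPipe Hands landmarks per frame and applies a toy adjacency-restricted channel shift in the spirit of Shift-GCN's spatial shift with adjacency-based weighting; the edge list follows the public MediaPipe hand topology, while the block partitioning, function names, and shift scheme are assumptions for illustration only.

    import mediapipe as mp
    import numpy as np

    # 21 landmarks per hand; edges follow the MediaPipe Hands topology
    # (wrist at index 0, four joints per finger).
    EDGES = [(0, 1), (1, 2), (2, 3), (3, 4),          # thumb
             (0, 5), (5, 6), (6, 7), (7, 8),          # index
             (0, 9), (9, 10), (10, 11), (11, 12),     # middle
             (0, 13), (13, 14), (14, 15), (15, 16),   # ring
             (0, 17), (17, 18), (18, 19), (19, 20)]   # little

    def hand_keypoints(rgb_frame, detector):
        """Return a (21, 3) array of normalized landmark coordinates,
        or None when no hand is detected in the frame."""
        result = detector.process(rgb_frame)
        if not result.multi_hand_landmarks:
            return None
        lm = result.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32)

    def adjacency_shift(x, edges):
        """Toy adjacency-restricted channel shift (an assumption, not
        the paper's exact scheme): each joint keeps the first block of
        its channels and fills the remaining blocks with the matching
        channel blocks of its graph neighbors, so features only move
        along skeleton edges rather than between arbitrary joints."""
        n, c = x.shape
        nbrs = [[] for _ in range(n)]
        for a, b in edges:
            nbrs[a].append(b)
            nbrs[b].append(a)
        out = x.copy()
        for v in range(n):
            k = len(nbrs[v])
            if k == 0:
                continue
            block = c // (k + 1)          # one block kept, one per neighbor
            for i, u in enumerate(nbrs[v]):
                lo, hi = (i + 1) * block, (i + 2) * block
                out[v, lo:hi] = x[u, lo:hi]
        return out

    detector = mp.solutions.hands.Hands(static_image_mode=False,
                                        max_num_hands=1)

Stacking such shift layers with pointwise convolutions over a sequence of per-frame keypoint graphs would give a Shift-GCN-style classifier; the exact adjacency-based shift weighting proposed in the paper is not reproduced here.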

Original language: English
Pages (from-to): 4323-4333
Number of pages: 11
Journal: Journal of Theoretical and Applied Information Technology
Volume: 99
Issue number: 18
Publication status: Published - 2021

Keywords

  • Alphabet recognition
  • Graph convolutional network
  • Shift-GCN
  • Sign language
  • Skeletal data
