Sign language is an important means of communication for deaf and mute people. It is therefore desirable for a computer to recognize sign language automatically, so that non-disabled people can understand what is being signed. Many studies on sign language recognition have been carried out, one of which is sign language alphabet recognition using the Convolutional Neural Network (CNN). However, a CNN cannot represent skeletal data, which has a graph structure. The Graph Convolutional Network (GCN) is a generalization of the CNN that can extract features from graphs in non-Euclidean space, and it is widely used in action recognition research, for example in the Shift-GCN method. This study uses hand joint positions estimated by MediaPipe Hands, which naturally form a graph. The graph is processed by a modified Shift-GCN that introduces a shift weighting approach based on vertex adjacency. The dataset used in this study consists of hand keypoints extracted from video data of the 26 American Sign Language (ASL) alphabet letters. Based on the experimental results, the proposed method achieved a best accuracy of 99.962%.
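The spatial shift operation at the core of Shift-GCN can be illustrated with a minimal NumPy sketch. This shows only the standard non-local shift (channel c of each vertex receives the feature of the vertex c steps away); the sizes are illustrative, and the paper's adjacency-based shift weighting is not reproduced here:

```python
import numpy as np

V, C = 21, 8  # 21 MediaPipe hand keypoints, C feature channels (illustrative)

def nonlocal_spatial_shift(x):
    """Non-local spatial shift (Shift-GCN style): channel c of vertex v
    takes the feature of vertex (v + c) mod V, so each channel mixes
    information from a different vertex of the skeleton graph."""
    V, C = x.shape
    out = np.empty_like(x)
    for c in range(C):
        # np.roll with shift -c maps out[v, c] = x[(v + c) % V, c]
        out[:, c] = np.roll(x[:, c], -c)
    return out

x = np.arange(V * C, dtype=float).reshape(V, C)
y = nonlocal_spatial_shift(x)
```

After the shift, a pointwise (1x1) convolution mixes the channels, which is what lets this cheap permutation replace an explicit graph convolution.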
- Number of pages: 11
- Journal: Journal of Theoretical and Applied Information Technology
- Publication status: Published - 2021
- Alphabet recognition
- Graph convolutional network
- Sign language
- Skeletal data