The study of attention estimation for child-robot interaction scenarios

Muhammad Attamimi*, Takashi Omori

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

One of the biggest challenges in human-agent interaction (HAI) is the development of an agent, such as a robot, that can understand its partner (a human) and interact naturally. To realize this, a system (agent) should be able to observe a human well and estimate his/her mental state. Towards this goal, in this paper, we present a method of estimating a child's attention, one of the more important human mental states, in a free-play scenario of child-robot interaction (CRI). To realize attention estimation in such a CRI scenario, we first developed a system that can sense a child's verbal and non-verbal multimodal signals, such as gaze, facial expression, and proximity. The observed information was then used to train a Support Vector Machine (SVM)-based model that estimates a human's attention level. We investigated the accuracy of the proposed method by comparing its output with a human judge's estimation, and obtained some promising results, which we discuss here.
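The abstract describes training an SVM on multimodal features (gaze, facial expression, proximity, speech) against attention labels from a human judge. The sketch below is not the authors' implementation; it is a minimal illustration of that pipeline using scikit-learn, with a hypothetical per-frame feature layout and made-up toy values.

```python
# Minimal sketch (not the paper's code) of an SVM-based attention estimator
# trained on multimodal features, assuming scikit-learn is available.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-frame features from the sensing system:
# [gaze_toward_robot, smile_intensity, distance_to_robot_m, speech_active]
X = np.array([
    [0.9, 0.7, 0.6, 1.0],   # looking at robot, smiling, close, speaking
    [0.1, 0.2, 2.5, 0.0],   # looking away, neutral, far, silent
    [0.8, 0.5, 0.8, 0.0],
    [0.2, 0.1, 2.0, 0.0],
])
# Attention labels provided by a human judge (1 = attending, 0 = not)
y = np.array([1, 0, 1, 0])

# Scale features, then fit an SVM classifier (RBF kernel as a common default).
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)

# Estimate the attention label for a new observation frame.
new_frame = np.array([[0.7, 0.4, 1.0, 1.0]])
print(model.predict(new_frame))
```

In practice, the predicted labels would be compared against the human judge's estimates on held-out interaction data to evaluate accuracy, as the paper reports.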

Original language: English
Pages (from-to): 1220-1228
Number of pages: 9
Journal: Bulletin of Electrical Engineering and Informatics
Volume: 9
Issue number: 3
DOIs
Publication status: Published - Jun 2020

Keywords

  • Attention estimation
  • Child-robot interaction
  • Features extraction
  • Multimodal information
