Morphological Preprocessing for Low-Resolution Face Recognition using Common Space

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Citations (Scopus)

Abstract

Face recognition has been studied extensively, but most approaches have not produced satisfactory results on very low-resolution images. This study proposes morphological preprocessing to improve the performance of the common space approach to face recognition on low-resolution images. The morphological preprocessing consists of Top-Hat and Bottom-Hat transformations, which are capable of extracting small image elements and handling uneven lighting. k-Nearest Neighbor is used to recognize faces by measuring the distance between the deep CNN features of low- and high-resolution images in the common space. Experiments on the Yale Face dataset show that morphological preprocessing increases face recognition accuracy by 14.59%, 1.00%, and 2.50% for low-resolution images of size 24x24, 36x35, and 56x56, respectively.
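The abstract does not include an implementation, but the Top-Hat/Bottom-Hat step can be illustrated with a short Python sketch. This is a minimal example, not the authors' code: OpenCV is assumed as the image library, and the elliptical kernel size and the combination rule (image + Top-Hat - Bottom-Hat) are common choices rather than details taken from the paper.

import cv2

def morphological_preprocess(gray, kernel_size=9):
    # Hypothetical kernel size; the abstract does not specify one.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    # Top-Hat keeps bright details smaller than the structuring element.
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    # Bottom-Hat (Black-Hat) keeps small dark details such as shadows.
    bottomhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Emphasize bright details and suppress dark ones, which also
    # helps flatten uneven lighting.
    return cv2.subtract(cv2.add(gray, tophat), bottomhat)

The recognition step can be sketched the same way. Assuming the high- and low-resolution features have already been projected into the common space by the deep CNN (that mapping is out of scope here), a 1-Nearest-Neighbor search over the gallery embeddings assigns each probe an identity; scikit-learn and the random placeholder features below are assumptions for illustration only.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
gallery = rng.normal(size=(15, 128))        # stand-in high-res embeddings
labels = np.arange(15)                      # one identity per subject
probes = gallery + 0.1 * rng.normal(size=gallery.shape)  # noisy "low-res" embeddings

knn = KNeighborsClassifier(n_neighbors=1)   # Euclidean distance by default
knn.fit(gallery, labels)
print((knn.predict(probes) == labels).mean())  # fraction correctly matched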

Original language: English
Title of host publication: 8th International Conference on ICT for Smart Society
Subtitle of host publication: Digital Twin for Smart Society, ICISS 2021 - Proceeding
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665416979
DOIs
Publication status: Published - 2 Aug 2021
Event: 8th International Conference on ICT for Smart Society, ICISS 2021 - Virtual, Bandung, Indonesia
Duration: 2 Aug 2021 - 4 Aug 2021

Publication series

Name: 8th International Conference on ICT for Smart Society: Digital Twin for Smart Society, ICISS 2021 - Proceeding

Conference

Conference: 8th International Conference on ICT for Smart Society, ICISS 2021
Country/Territory: Indonesia
City: Virtual, Bandung
Period: 2/08/21 - 4/08/21

Keywords

  • face recognition
  • low-resolution
  • morphological
  • preprocessing
