Revisiting Dropout Regularization for Cross-Modality Person Re-Identification

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

This paper investigates targeted dropout regularization for improving the performance of convolutional neural network classifiers on cross-modality person re-identification problems. Dropout regularization is applied carefully to specific layers (targeted dropout regularization) or to specific regions of the feature map (spatially targeted dropout regularization). The intuition behind spatially targeted dropout regularization is that feature-map regions are not equally important, and the object of interest is more likely to appear near the center. We experimented extensively on the PKU-Sketch-ReID and multi-modality person re-identification datasets using the SwinTransformer deep neural network architecture. Three targeted dropout regularizations are used in the experiments: block-wise dropout, horizontal block-wise dropout, and vertical-horizontal block-wise dropout. Experiments on three cross-modality re-identification datasets show that the proposed spatially targeted dropout regularization improves the performance of deep neural network classifiers, with best rank-1 accuracies of 73.20% on PKU-Sketch-ReID, 52.73% on SYSU-MM01, and 72.74% on RegDB.
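As a rough illustration of the spatially targeted idea described in the abstract, the sketch below implements a horizontal block-wise dropout module in PyTorch that zeroes a contiguous band of feature-map rows during training, so the network cannot rely on any single horizontal stripe of the pedestrian. The module name, drop probability, band height, and placement in the network are illustrative assumptions and do not reproduce the paper's exact configuration.

```python
import torch
import torch.nn as nn


class HorizontalBlockDropout(nn.Module):
    """Illustrative horizontal block-wise dropout (hypothetical parameters).

    With probability `drop_prob`, a contiguous band of `band_rows`
    feature-map rows is zeroed during training and the surviving
    activations are rescaled to preserve the expected magnitude.
    """

    def __init__(self, drop_prob: float = 0.3, band_rows: int = 2):
        super().__init__()
        self.drop_prob = drop_prob
        self.band_rows = band_rows

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map
        if not self.training or torch.rand(1).item() > self.drop_prob:
            return x
        _, _, h, _ = x.shape
        band = min(self.band_rows, h - 1)
        if band <= 0:
            return x
        top = torch.randint(0, h - band + 1, (1,)).item()
        mask = torch.ones_like(x)
        mask[:, :, top:top + band, :] = 0.0
        keep_frac = (h - band) / h
        return x * mask / keep_frac
```

In use, such a module would typically be inserted after an intermediate backbone stage (e.g. between SwinTransformer blocks) rather than on the input image, which is where the "targeted" aspect of the regularization comes in; the exact layer choice here is an assumption.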

Original language: English
Pages (from-to): 102195-102209
Number of pages: 15
Journal: IEEE Access
Volume: 10
DOIs
Publication status: Published - 2022

Keywords

  • Spatially targeted dropout regularization
  • convolutional neural network
  • cross-modality data
  • person re-identification
