Toward facial re-identification: Experiments with data from an operational surveillance camera plant

Abstract

Person re-identification (ReID) is an active research topic. Almost all existing ReID approaches rely on local and global body features (e.g., clothing color and pattern, or body symmetry). These 'body ReID' methods implicitly assume that facial resolution is too low to aid in the ReID process. We assert that faces, even when captured in low-resolution environments, may contain unique and stable features for ReID. Such 'facial ReID' approaches remain relatively unexplored in the literature. In this work, we explore facial ReID using a new dataset collected from a real surveillance network in a municipal rapid transit system. It is a challenging ReID dataset, as it includes intentional changes in persons' appearances over time. We conduct multiple experiments on this dataset, exploiting deep neural networks to extract dense, low-resolution facial features that improve matching stability. We conclude that in cases where pedestrian appearance changes, low-resolution faces can be used to improve ReID matching performance.
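To make the facial-ReID idea concrete, the sketch below illustrates the general pattern of embedding low-resolution face crops with a deep network and comparing them by cosine similarity. It is not the authors' pipeline: the backbone (a torchvision ResNet-18 with its classifier removed) and the helper names `embed` and `match_score` are stand-ins chosen for illustration only.

```python
# Illustrative sketch: deep features from low-resolution face crops for ReID matching.
# The embedding network here is a generic stand-in, NOT the network used in the paper.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Stand-in feature extractor: ResNet-18 without its classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Low-resolution face crops are upsampled to the network's expected input size.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(face_crop: Image.Image) -> torch.Tensor:
    """Return an L2-normalized embedding for a (possibly low-resolution) face crop."""
    x = preprocess(face_crop).unsqueeze(0)        # shape (1, 3, 224, 224)
    feat = backbone(x)                            # shape (1, 512)
    return F.normalize(feat, dim=1).squeeze(0)    # unit-length feature vector

def match_score(query: Image.Image, gallery: Image.Image) -> float:
    """Cosine similarity of two face crops; higher means more likely the same person."""
    return float(torch.dot(embed(query), embed(gallery)))
```

In a ReID setting, a query face would be scored against every gallery identity this way, and candidates ranked by similarity; such a facial score could also be fused with a body-appearance score when clothing changes make body features unreliable.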
