- This paper makes the first attempt to investigate modality-agnostic person re-identification with a descriptive query.
- This paper introduces a novel unified person re-identification (UNIReID) architecture based on a dual encoder that jointly integrates cross-modal and multi-modal task learning. With task-specific modality learning and task-aware dynamic training, UNIReID enhances generalization ability across tasks and domains. A minimal illustrative sketch of such a dual-encoder setup is given after the dataset description below.
- This paper contributes three multi-modal ReID datasets to support unified ReID evaluation.
Based on the existing text-based datasets (CUHK-PEDES, ICFG-PEDES, and RSTPReid), we collect sketches from the photo modality to obtain the multi-modal datasets Tri-CUHK-PEDES, Tri-ICFG-PEDES, and Tri-RSTPReid. The collected sketches are available via Baidu Netdisk: https://pan.baidu.com/s/1c0h2utqisEx6OzGuoSaQhA (extraction code: ndau) or Google Drive: https://drive.google.com/file/d/12FIN-93Y4vXqVDVWLvLBwg3q0z0Vtwij/view?usp=sharing.
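
The dual-encoder idea can be pictured with a minimal PyTorch sketch: a shared image encoder embeds gallery photos and sketch queries, a text encoder embeds descriptions, and a simple additive fusion of the sketch and text features forms the multi-modal query. The backbones, dimensions, class names, and fusion used here are illustrative placeholders, not the released UNIReID implementation.

```python
# Minimal, illustrative dual-encoder sketch (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualEncoderReID(nn.Module):
    def __init__(self, embed_dim=512, vocab_size=30522, num_ids=1000):
        super().__init__()
        # Image branch shared by photo gallery and sketch queries; a real
        # system would use a ViT/CLIP visual backbone instead of this stub.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Text branch; a real system would use a transformer text encoder.
        self.token_embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.text_proj = nn.Linear(embed_dim, embed_dim)
        # Identity classifier applied to every query type for the ReID loss.
        self.classifier = nn.Linear(embed_dim, num_ids)

    def encode_image(self, images):
        return F.normalize(self.image_encoder(images), dim=-1)

    def encode_text(self, token_ids):
        return F.normalize(self.text_proj(self.token_embed(token_ids)), dim=-1)

    def forward(self, photos, sketches, captions):
        g = self.encode_image(photos)    # gallery photo features
        s = self.encode_image(sketches)  # sketch query features
        t = self.encode_text(captions)   # text query features
        m = F.normalize(s + t, dim=-1)   # fused sketch+text (multi-modal) query
        # Task-specific identity logits for text-only, sketch-only, and fused queries.
        logits = {name: self.classifier(feat)
                  for name, feat in (("text", t), ("sketch", s), ("fused", m))}
        return g, logits


if __name__ == "__main__":
    model = DualEncoderReID()
    photos = torch.randn(4, 3, 224, 224)
    sketches = torch.randn(4, 3, 224, 224)
    captions = torch.randint(0, 30522, (4, 32))
    gallery_feats, query_logits = model(photos, sketches, captions)
    print(gallery_feats.shape, {k: v.shape for k, v in query_logits.items()})
```

In this sketch, retrieval for each task would rank gallery features `g` against the corresponding query feature (text, sketch, or fused), while the shared classifier stands in for the task-specific supervision described above.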
```
@inproceedings{chen2023towards,
  title={Towards Modality-Agnostic Person Re-identification with Descriptive Query},
  author={Cuiqun Chen and Mang Ye and Ding Jiang},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```