
# Person-Re-Identification

Automated person re-identification is essential for making sense of the vast quantity of visual data generated by the rapid expansion of large-scale distributed multi-camera systems. Person re-identification, a core task in intelligent video surveillance, is the problem of correctly matching individuals across images captured under varied conditions by different cameras. It is inherently challenging because of low-resolution images, illumination changes, unconstrained poses, and occlusions.

In this project, we develop a person re-identification model using deep neural networks (DNNs) that can handle variable-size input images. Specifically, we implement two preprocessing techniques that reduce the chance of overfitting: we make the model robust to occlusion using Random Erasing, a data augmentation technique, and reduce the influence of pose variation on the learned features using a pose-normalization Generative Adversarial Network (PN-GAN).

In addition, we implement and integrate the Part-based Convolutional Baseline (PCB) to further improve the results. We briefly describe the trained models along with their evaluation results on the Market-1501 dataset and the provided validation and test sets.
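As a rough illustration of the PCB idea (a sketch only, not the exact architecture used in this project), the backbone feature map is split into horizontal stripes, each stripe is pooled and reduced, and each part gets its own identity classifier. The part count, embedding size, and class names below are illustrative assumptions.

```python
import torch.nn as nn
import torchvision


class PCBSketch(nn.Module):
    """Minimal Part-based Convolutional Baseline sketch (illustrative, not this repo's code)."""

    def __init__(self, num_classes, num_parts=6, embed_dim=256):
        super().__init__()
        backbone = torchvision.models.resnet50()  # pretrained weights omitted for brevity
        # Keep the convolutional trunk; drop global pooling and the final FC layer.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.num_parts = num_parts
        # Pool each horizontal stripe of the feature map to a single vector.
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        self.reducers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(2048, embed_dim, 1), nn.BatchNorm2d(embed_dim), nn.ReLU())
            for _ in range(num_parts)
        )
        self.classifiers = nn.ModuleList(
            nn.Linear(embed_dim, num_classes) for _ in range(num_parts)
        )

    def forward(self, x):
        feat = self.backbone(x)            # (B, 2048, H, W)
        parts = self.part_pool(feat)       # (B, 2048, num_parts, 1)
        logits = []
        for i in range(self.num_parts):
            p = parts[:, :, i:i + 1, :]            # i-th horizontal stripe
            p = self.reducers[i](p).flatten(1)     # (B, embed_dim)
            logits.append(self.classifiers[i](p))  # one classification head per part
        return logits
```

At test time, the per-part embeddings are typically concatenated to form the final descriptor instead of using the classification heads.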

## Baseline used

https://github.com/KaiyangZhou/deep-person-reid
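A minimal way to train and evaluate a softmax baseline on Market-1501 with this library (torchreid) is sketched below, following the library's documented quick-start; exact argument names can vary slightly between versions, and the hyperparameters shown are placeholders rather than the settings used in this project.

```python
import torchreid

# Data manager for Market-1501; image size and batch sizes are illustrative.
datamanager = torchreid.data.ImageDataManager(
    root="reid-data",
    sources="market1501",
    targets="market1501",
    height=256,
    width=128,
    batch_size_train=32,
    batch_size_test=100,
    transforms=["random_flip", "random_crop"],
)

# ResNet-50 backbone trained with a softmax identity loss.
model = torchreid.models.build_model(
    name="resnet50",
    num_classes=datamanager.num_train_pids,
    loss="softmax",
    pretrained=True,
).cuda()

optimizer = torchreid.optim.build_optimizer(model, optim="adam", lr=0.0003)
scheduler = torchreid.optim.build_lr_scheduler(optimizer, lr_scheduler="single_step", stepsize=20)

engine = torchreid.engine.ImageSoftmaxEngine(
    datamanager, model, optimizer=optimizer, scheduler=scheduler, label_smooth=True
)
engine.run(save_dir="log/resnet50", max_epoch=60, eval_freq=10, print_freq=10, test_only=False)
```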

## Pose Normalization

https://github.com/naiq/PN_GAN.git

Paper link: http://openaccess.thecvf.com/content_ECCV_2018/papers/Xuelin_Qian_Pose-Normalized_Image_Generation_ECCV_2018_paper.pdf

For more information, read: Pose Normalized Training
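The fusion step of the PN-GAN approach can be sketched as follows: the generator synthesizes the same person in a set of canonical poses, features are extracted from the original and synthesized images, and the features are fused into a pose-invariant descriptor. The `generator` and `extractor` interfaces below are assumptions for illustration, not the actual API of the PN_GAN repository.

```python
import torch


def pose_invariant_descriptor(image, canonical_poses, generator, extractor):
    """Sketch of PN-GAN feature fusion (assumed interfaces, illustrative only).

    generator(image, pose) is assumed to return the person rendered in a canonical
    pose; extractor(image) is assumed to return a 1-D feature tensor. Features from
    the original and pose-normalized images are fused by element-wise max pooling,
    one possible fusion choice, to reduce the effect of the original pose.
    """
    feats = [extractor(image)]
    for pose in canonical_poses:
        synthetic = generator(image, pose)  # pose-normalized image
        feats.append(extractor(synthetic))
    return torch.stack(feats, dim=0).max(dim=0).values
```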

## Model Outline

(Figure: training block schematic)

## GAN Results

(Figure: generator output after the 10th epoch)

(Figure: generator output after the 12th epoch)

## Random Erasing Data Augmentation

https://github.com/zhunzhong07/Random-Erasing

Paper link: https://arxiv.org/abs/1708.04896
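A simplified sketch of the Random Erasing transform is shown below (not the reference implementation): with some probability, a rectangular region of the image tensor is replaced with random values, simulating occlusion. The area and aspect-ratio ranges follow the defaults reported in the paper; note that recent torchvision versions also ship a built-in `transforms.RandomErasing`.

```python
import math
import random

import torch


class RandomErasingSketch:
    """Simplified Random Erasing transform for a (C, H, W) image tensor (illustrative)."""

    def __init__(self, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3)):
        self.p = p
        self.area_range = area_range
        self.aspect_range = aspect_range

    def __call__(self, img):
        if random.random() > self.p:
            return img
        c, h, w = img.shape
        for _ in range(100):  # retry until the erased box fits inside the image
            target_area = random.uniform(*self.area_range) * h * w
            aspect = random.uniform(*self.aspect_range)
            eh = int(round(math.sqrt(target_area * aspect)))
            ew = int(round(math.sqrt(target_area / aspect)))
            if eh < h and ew < w:
                top = random.randint(0, h - eh)
                left = random.randint(0, w - ew)
                # Replace the selected rectangle with random noise.
                img[:, top:top + eh, left:left + ew] = torch.rand(c, eh, ew)
                return img
        return img
```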

## Link to the trained model (Random Erasing only)

https://drive.google.com/open?id=1Gm7hpF3HoG2Xt0WV92Wi07U428BH8B1q

## Links to the feature .mat files (Market-1501 format)

Query: https://drive.google.com/open?id=1StnqZt9MOqiyUYnf_RfhBGXHWQiFgLpz

Gallery: https://drive.google.com/open?id=1jeoQyxqtRW07M1Shbe4pt9Aw3e-m1icY

## Result feature .mat files

The features extracted on the test set are in the Result_mat folder: the feature files for the three result sets (feature_test_query.mat and feature_test_gallery.mat for each) are in Result_1, Result_2, and Result_3 respectively.
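A minimal sketch of how such query and gallery feature files could be loaded and ranked is shown below. The key name `"feature"` inside the .mat files is an assumption; inspect the files to confirm the actual keys before using this.

```python
import numpy as np
from scipy.io import loadmat

# Assumed key name "feature"; the actual key in these .mat files may differ.
query = loadmat("Result_mat/Result_1/feature_test_query.mat")["feature"]      # (num_query, dim)
gallery = loadmat("Result_mat/Result_1/feature_test_gallery.mat")["feature"]  # (num_gallery, dim)

# L2-normalize so that dot products correspond to cosine similarity.
query = query / np.linalg.norm(query, axis=1, keepdims=True)
gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)

# For each query, sort gallery images by decreasing cosine similarity.
sim = query @ gallery.T
ranking = np.argsort(-sim, axis=1)  # ranking[i] lists gallery indices nearest to query i
print(ranking[:5, :10])             # top-10 gallery matches for the first 5 queries
```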

## License

This project is licensed under the MIT License; see the LICENSE file for details.