Awesome reID
LeaderBoard
If you notice any result or public code that has not been included in this table, please do not hesitate to contact Zhedong Zheng to have the method added, or create a pull request. You are welcome!
Priority is given to papers whose code has been released.
Code
The 1st Place Submission to AICity Challenge 2020 re-id track [code] [paper]
Drone-based building re-id [code] [paper]
Supervised Learning
Train and Test on DukeMTMC-reID
Methods | Rank@1 | mAP | Reference |
---|---|---|---|
BoW+kissme | 25.13% | 12.17% | “Scalable person re-identification: a benchmark”, Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang and Qi Tian, ICCV 2015 [project] |
LOMO+XQDA | 30.75% | 17.04% | “Person Re-identification by Local Maximal Occurrence Representation and Metric Learning”, Shengcai Liao, Yang Hu, Xiangyu Zhu and Stan Z Li, CVPR 2015 [project] |
Basel. | 65.22% | 44.99% | “Person Re-identification: Past, Present and Future”, Liang Zheng, Yi Yang, and Alexander G. Hauptmann, arXiv:1610.02984 [code] |
Basel. + LSRO | 67.68% | 47.13% | “Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro”, Zhedong Zheng, Liang Zheng and Yi Yang, ICCV 2017 [code] |
Basel. + OIM | 68.1% | - | “Joint Detection and Identification Feature Learning for Person Search”, Tong Xiao, Shuang Li, Bochao Wang, Liang Lin, Xiaogang Wang, CVPR 2017 |
Verif + Identif | 68.9% | 49.3% | “A Discriminatively Learned Cnn Embedding for Person Re-identification”, Zhedong Zheng, Liang Zheng, and Yi Yang, TOMM 2017. [code] |
APR | 70.69% | 51.88% | “Improving person re-identification by attribute and identity learning”, Yutian Lin, Liang Zheng, Zhedong Zheng, Yu Wu, Yi Yang, Pattern Recognition 2019 [Attribute Dataset] |
ACRN | 72.58% | 51.96% | “Person Re-Identification by Deep Learning Attribute-Complementary Information”, Arne Schumann and Rainer Stiefelhagen, CVPR 2017 Workshop |
PAN | 71.59% | 51.51% | “Pedestrian Alignment Network for Large-scale Person Re-identification”, Zhedong Zheng, Liang Zheng, Yi Yang, TCSVT 2018 [code] |
PAN+rerank | 75.94% | 66.74% | |
FMN | 74.51% | 56.88% | “Let Features Decide for Themselves: Feature Mask Network for Person Re-identification”, Guodong Ding, Salman Khan, Zhenmin Tang, Fatih Porikli, arXiv:1711.07155 |
FMN+rerank | 79.52% | 72.79% | |
Bilinear Coding | 76.2% | 56.9% | “Weighted Bilinear Coding over Salient Body Parts for Person Re-identification” Zhigang Chang, Zhou Qin, Heng Fan, Hang Su, Hua Yang, Shibao Zheng, and Haibin Ling, Neurocomputing |
SVDNet | 76.7% | 56.8% | “SVDNet for Pedestrian Retrieval”, Yifan Sun, Liang Zheng, Weijian Deng, Shengjin Wang, ICCV 2017 [code] |
OG-Net | 76.93% | 57.20% | “Parameter-Efficient Person Re-identification in the 3D Space”, Zhedong Zheng and Yi Yang, TNNLS 2022. [pytorch code] |
dMpRL | 76.81% | 58.56% | “Multi-pseudo Regularized Label for Generated Samples in Person Re-Identification”, Yan Huang, Jingsong Xu, Qiang Wu, Zhedong Zheng, Zhaoxiang Zhang, and Jian Zhang, TIP 2018 [code] |
AACN | 76.84% | 59.25% | “Attention-Aware Compositional Network for Person Re-identification”, Jing Xu, Rui Zhao, Feng Zhu, Huaming Wang and Wanli Ouyang, CVPR2018 |
CamStyle + RE | 78.32% | 57.61% | “Camera Style Adaptation for Person Re-identification”, Zhun Zhong, Liang Zheng, Zhedong Zheng, Shaozi Li, Yi Yang, CVPR 2018 [code] |
DPFL | 79.2% | 60.6% | “Person Re-Identification by Deep Learning Multi-Scale Representations”, Yanbei Chen, Xiatian Zhu and Shaogang Gong, ICCV2017 workshop |
SVDNet + RE | 79.31% | 62.44% | “Random Erasing Data Augmentation”, Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang, AAAI 2020 |
SVDNet + RE + rerank | 84.02% | 78.28% | |
PSE | 79.8% | 62.0% | “A Pose-Sensitive Embedding for Person Re-Identification with Expanded Cross Neighborhood Re-Ranking”, M. Saquib Sarfraz, Arne Schumann, Andreas Eberle, Rainer Stiefelhagen, CVPR 2018 [code] |
PSE + ECN + rerank | 85.2% | 79.8% | |
ATWL(2-stream) | 79.80% | 63.40% | “Features for Multi-Target Multi-Camera Tracking and Re-Identification”, Ergys Ristani and Carlo Tomasi, CVPR 2018 |
Mid-level Representation | 80.43% | 63.88% | “The Devil is in the Middle: Exploiting Mid-level Representations for Cross-Domain Instance Matching”, Qian Yu, Xiaobin Chang, Yi-Zhe Song, Tao Xiang, Timothy M. Hospedales, arXiv:1711.08106 |
HA-CNN | 80.5% | 63.8% | “Harmonious Attention Network for Person Re-Identification”, Wei Li, Xiatian Zhu, and Shaogang Gong, CVPR 2018 |
Deep-Person | 80.90% | 64.80% | “Deep-Person: Learning Discriminative Deep Features for Person Re-Identification”, Xiang Bai, Mingkun Yang, Tengteng Huang, Zhiyong Dou, Rui Yu, Yongchao Xu, arXiv:1711.10658 |
MLFN | 81.2% | 62.8% | “Multi-Level Factorisation Net for Person Re-Identification” Xiaobin Chang, Timothy M. Hospedales, and Tao Xiang, CVPR 2018. |
DuATM (Dense-121) | 81.82% | 64.58% | “Dual Attention Matching Network for Context-Aware Feature Sequence based Person Re-Identification”, Jianlou Si, Honggang Zhang, Chun-Guang Li, Jason Kuen, Xiangfei Kong, Alex C. Kot, Gang Wang, CVPR 2018 |
PCB | 83.3% | 69.2% | “Beyond Part Models: Person Retrieval with Refined Part Pooling”, Yifan Sun, Liang Zheng, Yi Yang, Qi Tian, Shengjin Wang, ECCV 2018 |
Part-aligned(Inception V1, OpenPose) | 84.4% | 69.3% | “Part-Aligned Bilinear Representations for Person Re-identification”, Yumin Suh, Jingdong Wang, Siyu Tang, Tao Mei, Kyoung Mu Lee, ECCV 2018 |
GP-reID | 85.2% | 72.8% | “Re-ID done right: towards good practices for person re-identification”, Jon Almazan, Bojana Gajic, Naila Murray, Diane Larlus, arXiv:1801.05339 |
SPreID (Res-152) | 85.95% | 73.34% | “Human Semantic Parsing for Person Re-identification”, Kalayeh, Mahdi M., Emrah Basaran, Muhittin Gokmen, Mustafa E. Kamasak, and Mubarak Shah, CVPR 2018 |
DG-Net (Res-50) | 86.6% | 74.8% | “Joint Discriminative and Generative Learning for Person Re-identification”, Zhedong Zheng, Xiaodong Yang, Zhiding Yu, Liang Zheng, Yi Yang and Jan Kautz, CVPR 2019. [code] |
MGN | 88.7% | 78.4% | “Learning Discriminative Features with Multiple Granularities for Person Re-Identification” Wang, Guanshuo, Yufeng Yuan, Xiong Chen, Jiwei Li, and Xi Zhou. ACM MM 2018. |
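
The Rank@1 and mAP columns above follow the usual single-query re-ID protocol: each query is ranked against the gallery, gallery images that share both the identity and the camera of the query are treated as junk, and the remaining ranked list is scored. The sketch below illustrates only this protocol; the function and array names are illustrative and not taken from any of the listed codebases.

```python
# A minimal sketch of the standard re-ID evaluation (Rank@1 / mAP), assuming
# L2-normalized features and the usual "same ID + same camera = junk" rule.
# Array names (qf, gf, q_ids, ...) are illustrative.
import numpy as np

def evaluate(qf, gf, q_ids, g_ids, q_cams, g_cams):
    """qf: [num_query, dim], gf: [num_gallery, dim]; ids/cams are 1-D int arrays."""
    scores = qf @ gf.T                      # cosine similarity, higher = more similar
    rank1, aps = 0.0, []
    for i in range(qf.shape[0]):
        order = np.argsort(-scores[i])      # gallery indices, best match first
        same_id = g_ids[order] == q_ids[i]
        same_cam = g_cams[order] == q_cams[i]
        keep = ~(same_id & same_cam)        # drop same-ID, same-camera junk images
        matches = same_id[keep]
        if not matches.any():               # skip queries without valid ground truth
            continue
        rank1 += float(matches[0])          # Rank@1: is the top remaining image correct?
        hits = np.cumsum(matches)
        precision = hits / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    return rank1 / len(aps), float(np.mean(aps))
```

Rows marked `+rerank` additionally post-process the query-gallery distances with a re-ranking step (e.g. k-reciprocal or expanded cross neighborhood re-ranking) before these metrics are computed.
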
Transfer Learning
Train on Market-1501, Test on DukeMTMC-reID
The primary motivation is that collecting ID annotations is relatively expensive in terms of human labor and time.
Is it possible to use fewer annotations on the unseen dataset, especially ID labels? Many of the entries below rely on clustering-based pseudo labels; a minimal sketch of that recipe is given after the first table.
All methods below use the DukeMTMC-reID training data without ID labels (camera IDs may be used).

Methods | Rank@1 | mAP | Reference |
---|---|---|---|
UMDL | 18.5% | 7.3% | “Unsupervised cross-dataset transfer learning for person re-identification”, Peixi Peng, Tao Xiang, Yaowei Wang, Massimiliano Pontil, Shaogang Gong, Tiejun Huang, and Yonghong Tian, CVPR 2016 | |
Verif + Identif | 25.7% | 12.8% | “A Discriminatively Learned Cnn Embedding for Person Re-identification”, Zhedong Zheng, Liang Zheng, and Yi Yang, TOMM 2017. [pytorch code] | |
PUL | 30.4% | 16.8% | “Unsupervised Person Re-identification: Clustering and Fine-tuning”, Hehe Fan, Liang Zheng, Yi Yang, TOMM2018 [code] | |
PN-GAN | 29.9% | 15.8% | “Pose-Normalized Image Generation for Person Re-identification” Xuelin Qian, Yanwei Fu, Tao Xiang, Wenxuan Wang, Jie Qiu, Yang Wu, Yu-Gang Jiang, Xiangyang Xue, ECCV 2018 | |
OG-Net | 31.3% | 16.3% | “Parameter-Efficient Person Re-identification in the 3D Space”, Zhedong Zheng and Yi Yang, TNNLS 2022. [pytorch code] | |
SPGAN | 41.4% | 22.3% | “Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification”, Weijian Deng, Liang Zheng, Guoliang Kang, Yi Yang, Qixiang Ye, Jianbin Jiao, CVPR 2018 | |
TJ-AIDL | 44.3% | 23.0% | “Transferable Joint Attribute-Identity Deep Learning for Unsupervised Person Re-Identification”, Jingya Wang, Xiatian Zhu, Shaogang Gong, Wei Li, ECCV 2018 | |
MMFA | 45.3% | 24.7% | “Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification”, Shan Lin, Haoliang Li, Chang-Tsun Li, Alex Chichung Kot, BMVC 2018 | |
DG-Net | 43.5% | 25.4% | “Joint Discriminative and Generative Learning for Person Re-identification”, Zhedong Zheng, Xiaodong Yang, Zhiding Yu, Liang Zheng, Yi Yang and Jan Kautz, CVPR 2019. (Results are in Appendix) | |
SPGAN+LMP | 46.4% | 26.2% | ||
HHL | 46.9% | 27.2% | “Generalizing A Person Retrieval Model Hetero- and Homogeneously”, Zhun Zhong, Liang Zheng, Shaozi Li, Yi Yang, ECCV 2018 | |
BUC | 47.4% | 27.5% | “A Bottom-up Clustering Approach to Unsupervised Person Re-identification”, Yutian Lin, Xuanyi Dong, Liang Zheng, Yan Yan, Yi Yang, AAAI 2019 | |
CFSM | 49.8% | 27.3% | “Disjoint Label Space Transfer Learning with Common Factorised Space”, Xiaobin Chang, Yongxin Yang, Tao Xiang, Timothy M. Hospedales, AAAI 2019 | |
ARN | 60.2% | 33.4% | “Adaptation and Re-Identification Network: An Unsupervised Deep Transfer Learning Approach to Person Re-Identification”, Yu-Jhe Li, Fu-En Yang, Yen-Cheng Liu, Yu-Ying Yeh, Xiaofei Du, and Yu-Chiang Frank Wang, CVPR 2018 Workshop | |
TAUDL | 61.7% | 43.5% | “Unsupervised Person Re-identification by Deep Learning Tracklet Association”, Minxian Li, Xiatian Zhu, and Shaogang Gong, ECCV 2018 | |
UDARTP | 68.4% | 49.0% | “Unsupervised Domain Adaptive Re-Identification: Theory and Practice”, Liangchen Song, Cheng Wang, Lefei Zhang, Bo Du, Qian Zhang, Chang Huang, and Xinggang Wang, arXiv:1807.11334 | |
PCB-PAST | 72.4% | 54.3% | “Self-training with progressive augmentation for unsupervised cross-domain person re-identification” Xinyu Zhang, Jiewei Cao, Chunhua Shen, and Mingyu You. ICCV 2019 | |
SSG | 73.0% | 53.4% | “Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification” Yang Fu, Yunchao Wei, Guanshuo Wang, Yuqian Zhou, Honghui Shi, and Thomas S Huang. ICCV 2019 | |
MMCL | 72.4% | 54.5% | “Unsupervised Person Re-identification via Multi-label Classification” Dongkai Wang and Shiliang Zhang. CVPR 2020. | |
AD-Cluster | 72.6% | 54.1% | “AD-Cluster: Augmented Discriminative Clustering for Domain Adaptive Person Re-identification” Yunpeng Zhai, Shijian Lu, Qixiang Ye, Xuebo Shan, Jie Chen, Rongrong Ji, and Yonghong Tian. CVPR 2020 | |
B-SNR+GDS-H | 76.7% | 59.7% | “Global Distance-distributions Separation for Unsupervised Person Re-identification” Xin Jin, Cuiling Lan, Wenjun Zeng, Zhibo Chen. ECCV 2020 | |
NRMT | 77.8% | 62.2% | “Unsupervised domain adaptation with noise resistible mutual-training for person re-identification” Fang Zhao, Shengcai Liao, Guo-Sen Xie, Jian Zhao, Kaihao Zhang, and Ling Shao. ECCV 2020 | |
DAAM | 77.6% | 63.9% | “Domain Adaptive Attention Model for Unsupervised Cross-Domain Person Re-Identification” Yangru Huang, Peixi Peng, Yi Jin, Junliang Xing, Congyan Lang, Songhe Feng. AAAI 2020 | |
MMT | 78.0% | 65.1% | “Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification” Yixiao Ge, Dapeng Chen, Hongsheng Li. ICLR 2020 | |
DGNet++ | 78.9% | 63.8% | “Joint disentangling and adaptation for cross-domain person re-identification” Yang Zou, Xiaodong Yang, Zhiding Yu, B.V.K. Vijaya Kumar, Jan Kautz. ECCV20 | |
MEB-Net | 79.6% | 66.1% | “Multiple expert brainstorming for domain adaptive person re-identification” Yunpeng Zhai, Qixiang Ye, Shijian Lu, Mengxi Jia, Rongrong Ji, Yonghong Tian. ECCV 2020 | |
UNRN | 82.0% | 69.1% | “Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification” Kecheng Zheng, Cuiling Lan, Wenjun Zeng, Zhizheng Zhang, and Zheng-Jun Zha. AAAI 2021 | |
Cluster Contrast + GEM | 86.8% | 76.0% | “Cluster Contrast for Unsupervised Person Re-Identification” Dai, Zuozhuo and Wang, Guangyuan and Zhu, Siyu and Yuan, Weihao and Tan, Ping. arXiv 2021 |
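
Many of the unsupervised entries above (e.g. PUL, BUC, SSG, AD-Cluster, Cluster Contrast) share one overall loop: extract features on the unlabeled target domain with a source-pretrained model, cluster them into pseudo identities, fine-tune on those pseudo labels, and repeat. The sketch below shows only this generic loop; the model, loader, and hyper-parameters (DBSCAN with cosine distance, eps=0.6) are assumptions for illustration, not any single paper's setting.

```python
# A minimal sketch of the clustering-based pseudo-labeling loop shared (in spirit)
# by several entries above. Model, data loader, and hyper-parameters are
# illustrative assumptions, not a specific paper's exact recipe.
import torch
import torch.nn.functional as F
from sklearn.cluster import DBSCAN

def extract_features(model, loader, device="cuda"):
    """Run a source-pretrained model over unlabeled target-domain images."""
    model.eval()
    feats = []
    with torch.no_grad():
        for images, _ in loader:            # any ID labels in the loader are ignored
            f = model(images.to(device))
            feats.append(F.normalize(f, dim=1).cpu())
    return torch.cat(feats).numpy()

def assign_pseudo_labels(features, eps=0.6, min_samples=4):
    """Cluster target features; each cluster index becomes a pseudo identity (-1 = outlier)."""
    return DBSCAN(eps=eps, min_samples=min_samples, metric="cosine").fit_predict(features)

# Typical loop: extract features -> cluster into pseudo IDs -> fine-tune the model
# on the pseudo labels (outliers discarded) -> repeat until performance saturates.
```
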
Train on MSMT17, Test on DukeMTMC-reID
All methods below use the DukeMTMC-reID training data without ID labels (camera IDs may be used).

Methods | Rank@1 | mAP | Reference |
---|---|---|---|
Verif + Identif | 48.7% | 27.5% | “A Discriminatively Learned Cnn Embedding for Person Re-identification”, Zhedong Zheng, Liang Zheng, and Yi Yang, TOMM 2017. [pytorch code] | |
OG-Net | 31.3% | 16.3% | “Parameter-Efficient Person Re-identification in the 3D Space”, Zhedong Zheng and Yi Yang, TNNLS 2022. [pytorch code] | |
DG-Net | 62.0% | 40.7% | “Joint Discriminative and Generative Learning for Person Re-identification”, Zhedong Zheng, Xiaodong Yang, Zhiding Yu, Liang Zheng, Yi Yang and Jan Kautz, CVPR 2019. [code] (Results are in Appendix) | |
MAR | 67.1% | 48.0% | “Unsupervised Person Re-identification by Soft Multilabel Learning”, Hong-Xing Yu, Wei-Shi Zheng, Ancong Wu, Xiaowei Guo, Shaogang Gong, Jian-Huang Lai, CVPR 2019. | |
UDARTP | 75.0% | 57.1% | “Unsupervised Domain Adaptive Re-Identification: Theory and Practice”, Liangchen Song, Cheng Wang, Lefei Zhang, Bo Du, Qian Zhang, Chang Huang, and Xinggang Wang, arXiv:1807.11334 |
Train on MSMT17, Test on Market
All methods below use the Market-1501 training data without ID labels (camera IDs may be used).

Methods | Rank@1 | mAP | Reference |
---|---|---|---|
OG-Net | 40.1% | 17.6% | “Parameter-Efficient Person Re-identification in the 3D Space”, Zhedong Zheng and Yi Yang, TNNLS 2022. [pytorch code] | |
DG-Net | 61.8% | 33.6% | “Joint Discriminative and Generative Learning for Person Re-identification”, Zhedong Zheng, Xiaodong Yang, Zhiding Yu, Liang Zheng, Yi Yang and Jan Kautz, CVPR 2019. [code] (Results are in Appendix) |
DukeMTMC-reID Protocol Citation
@inproceedings{zheng2017unlabeled,
title={Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro},
author={Zheng, Zhedong and Zheng, Liang and Yang, Yi},
booktitle={Proceedings of the IEEE International Conference on Computer Vision},
year={2017}
}
@inproceedings{ristani2016MTMC,
title = {Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking},
author = {Ristani, Ergys and Solera, Francesco and Zou, Roger and Cucchiara, Rita and Tomasi, Carlo},
booktitle = {European Conference on Computer Vision workshop on Benchmarking Multi-Target Tracking},
year = {2016}
}