MORE 2025
ACM Web Conference
Workshop on
Multimedia Object Re-ID: Advancements, Challenges, and Opportunities (MORE 2025)
Accepted papers (top 50%) will be published in the ACM Web Conference Workshop proceedings and will go through the same peer-review process as regular papers. Several authors will be invited to give oral presentations.
[Accepted Workshop Proposal] [Submission Site]
News
- Challenge site is online.
- Paper submission site is online.
- Workshop site is online.
Abstract
Object re-identification (or object re-ID) has gained significant attention in recent years, fueled by the increasing demand for advanced video analysis and safety systems. In object re-identification, a query can be of different modalities, such as an image, a video, or natural language, containing or describing the object of interest. This workshop aims to bring together researchers, practitioners, and enthusiasts interested in object re-identification to delve into the latest advancements, challenges, and opportunities in this dynamic field. The workshop covers a spectrum of topics related to object re-identification, including but not limited to deep metric learning, multi-view data generation, video-based object re-identification, cross-domain object re-identification and real-world applications. The workshop provides a platform for researchers to showcase their work, exchange ideas, and foster potential collaborations. Additionally, it serves as a valuable opportunity for practitioners to stay abreast of the latest developments in object re-identification technology. Overall, this workshop creates a unique space to explore the rapidly evolving field of object re-identification and its profound impact on advancing the capabilities of multimedia analysis and retrieval.
Keywords: Multimedia Retrieval, Object Re-identification, Representation Learning, Deep Metric Learning, Multi-view Generation
The list of possible topics includes, but is not limited to:
- New Datasets and Benchmarks
- Deep Metric Learning
- Multi-view Data Generation
- Video-based Object Re-identification
- Cross-domain Object Re-identification
- Object Re-identification Domain Adaptation / Generalization
- Single/Multiple Object Tracking
- Object Geo-localization
- Multimedia Re-ranking
Submission
The submission site is at OpenReview.
The submission template can be found at ACM, or you may directly follow the Overleaf template.
Submission Type
(1) Original papers (up to 4 pages in length, plus unlimited pages for references): original solutions to tasks within the scope of the workshop topics and themes;
(2) Challenge papers (up to 4 pages in length, plus unlimited pages for references): winning papers from our challenge;
(3) Survey papers (up to 4 pages in length, plus unlimited pages for references): papers summarizing existing publications in leading conferences and high-impact journals that are relevant to the workshop topics. Page limits include diagrams and appendices.
Submissions should be single-blind due to limited publication time, written in English, and formatted according to the current ACM two-column conference format. Suitable LaTeX, Word, and Overleaf templates are available from the ACM Website (use “sigconf” proceedings template for LaTeX and the Interim Template for Word).
Important Dates
Submission of papers:
- Workshop Paper Submission Start: 27 Nov, 2024
- Challenge Start: 1 Dec, 2024
- Challenge End: 14 Dec, 2024
- Workshop Paper Submission End: 18 Dec, 2024
- Workshop Papers Notification: 23 Dec, 2024
- Camera-ready Submission: 25 Dec, 2024
- Workshop Dates: 28-29 Apr, 2025
Please note: the submission deadline is 11:59 p.m. Anywhere on Earth (AoE) on the stated deadline date.
Challenge Overview
Website: https://codalab.lisn.upsaclay.fr/competitions/21001
We also provide a challenging text-based person anomaly search dataset, called PAB, and workshop attendees are encouraged to participate in the competition. The motivation is that locating a pedestrian of interest, engaged in either normal or anomalous actions, usually requires searching an extremely large person image pool. In particular, PAB is a large-scale image-text Pedestrian Anomaly Behavior benchmark, featuring a broad spectrum of actions, e.g., running, performing, and playing soccer, and the corresponding anomalies of the same identity, e.g., lying, being hit, and falling. We will release PAB on our website and maintain a public leaderboard. The training set of PAB comprises 1,013,605 synthesized image-text pairs covering both normalities and anomalies, while the test set includes 1,978 real-world image-text pairs. In our primary evaluation, text-based person anomaly search is challenging and demands a finer-grained understanding of both the pedestrian's appearance and behavior. Our baseline model achieves 69.92% Recall@1, 95.60% Recall@5, and 97.32% Recall@10. We hope more participants will get involved in solving this challenge, and they may also consider the efficiency problem against a large candidate pool.
Check challenge details at https://arxiv.org/pdf/2411.17776
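For reference, below is a minimal sketch of how a Recall@K metric of this kind is typically computed. The function and data are illustrative only (it assumes one ground-truth gallery image per query), not the official evaluation script:

```python
def recall_at_k(ranked_lists, ground_truth, k):
    """Fraction of queries whose ground-truth image appears in the top-k results.

    ranked_lists[i] holds the gallery image names retrieved for query i,
    sorted from most to least similar; ground_truth[i] is its matching image.
    """
    hits = sum(gt in ranked[:k] for ranked, gt in zip(ranked_lists, ground_truth))
    return hits / len(ground_truth)

# Illustrative dummy data (not from the actual dataset):
ranked_lists = [["b.jpg", "a.jpg", "c.jpg"], ["z.jpg", "x.jpg", "y.jpg"]]
ground_truth = ["a.jpg", "z.jpg"]
for k in (1, 5, 10):
    print(f"Recall@{k}: {recall_at_k(ranked_lists, ground_truth, k):.2%}")
```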
The challenge dataset contains two parts:
- The basic dataset (training set) can be downloaded from OneDrive.
- The name-masked test-PAB dataset (query & gallery) can be downloaded from the same OneDrive.
The submission example can be found at Baseline Submission. Please zip it as "answer.zip" to submit the result.
Please return the top-10 person image names for each query. For example, the first query index is "LJBPICLSHG7YHW5"; the first line of the returned result in "answer.txt" should therefore list its results in order from Rank-1 to Rank-10.
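As a rough illustration, here is a minimal Python sketch of producing and packaging a submission. The `results` dictionary and the exact per-line layout are assumptions made for this example, so please verify them against the Baseline Submission file:

```python
import zipfile

# Hypothetical results: each query index maps to its top-10 gallery image
# names, ordered from Rank-1 to Rank-10 (names below are placeholders).
results = {
    "LJBPICLSHG7YHW5": [f"gallery_{i:04d}.jpg" for i in range(1, 11)],
}

with open("answer.txt", "w") as f:
    for query_index, top10 in results.items():
        # Assumed line layout: the query index followed by its ten ranked
        # image names -- check the Baseline Submission for the exact format.
        f.write(query_index + " " + " ".join(top10) + "\n")

# Package the result as answer.zip for upload.
with zipfile.ZipFile("answer.zip", "w") as zf:
    zf.write("answer.txt")
```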
Tips:
- For privacy protection, please blur faces in published materials (such as papers, videos, posters, etc.).
- For social good, please do not include any misleading words, such as "surveillance" and "secret".
Organizing Team
Yaxiong Wang, Hefei University of Technology, China | Yunzhong Hou, Australian National University, Australia | Shuyu Yang, Xi’an Jiaotong University, China |
Zhedong Zheng, University of Macau, China | Zhun Zhong, University of Nottingham, United Kingdom | Liang Zheng, Australian National University, Australia |
Workshop Citation
@inproceedings{wang2025MORE,
title={MORE'25 Multimedia Object Re-ID: Advancements, Challenges, and Opportunities},
author={Wang, Yaxiong and Hou, Yunzhong and Yang, Shuyu and Zheng, Zhedong and Zhong, Zhun and Zheng, Liang},
booktitle={ACM Web Conference Workshop},
year={2025}
}