AG-VPReID 2025: The 2nd Aerial-Ground Person ReID Challenge

AG-VPReID 2025 is the second installment of the Aerial-Ground Person Re-identification Challenge series, following the success of AG-ReID 2023. The competition addresses the problem of matching individuals between aerial drone footage and ground-level cameras, a capability central to modern surveillance systems, security applications, and public safety operations. While the previous challenge (AG-ReID 2023) focused on image-based matching at relatively low altitudes (up to 45m), AG-VPReID 2025 significantly expands the scope to video-based person re-identification at extreme altitudes (80m-120m). This expansion introduces challenges such as extreme viewpoint differences, significant scale variations, and complex temporal dynamics. The competition is officially part of the IJCB 2025 conference.

***

This competition will have two parts:

Part 1: "Algorithms-Self-Tested" will require competitors to perform self-evaluation on a test dataset, which contains video-based data captured from high-altitude (80m-120m) drones and ground cameras. The dataset comprises 3,027 identities, 13,511 tracklets, and approximately 3.7 million frames, collected using two UAVs, two CCTV cameras, and two wearable cameras on a university campus. Additionally, 15 soft biometric traits for each identity are provided. For this part, overall mean Average Precision (mAP) is automatically calculated on the platform.

Part 2: "Algorithms-Independently-Tested" will focus on evaluating the solutions submitted by competitors. The evaluation process will be conducted by the organizers. Rank-1, Rank-5, Rank-10, and mAP metrics for each test case (Aerial-to-Ground and Ground-to-Aerial) will be computed by the hosts from uploaded solutions, and the result table will be updated accordingly.
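For reference, the metrics above can be sketched as follows. This is a standard ReID-style evaluation in Python, not the organizers' official scoring script: `distmat` is a query-by-gallery distance matrix (smaller means more similar), and per-camera filtering common in some ReID protocols is omitted for brevity.

```python
import numpy as np

def evaluate(distmat, q_ids, g_ids, ranks=(1, 5, 10)):
    """Compute CMC Rank-k and mAP from a query-gallery distance matrix.

    Illustrative sketch only (not the official evaluation code):
    distmat has shape (num_query, num_gallery); camera-based filtering
    is omitted for brevity.
    """
    g_ids = np.asarray(g_ids)
    cmc_hits = np.zeros(max(ranks))
    aps = []
    for i in range(distmat.shape[0]):
        order = np.argsort(distmat[i])           # gallery sorted by similarity
        matches = (g_ids[order] == q_ids[i]).astype(int)
        if not matches.any():
            continue                             # query identity absent from gallery
        first_hit = np.argmax(matches)           # rank of first correct match
        if first_hit < len(cmc_hits):
            cmc_hits[first_hit:] += 1
        # average precision over all correct gallery entries
        hits = np.cumsum(matches)
        precision = hits / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    cmc = cmc_hits / len(aps)
    return {f"Rank-{r}": cmc[r - 1] for r in ranks}, float(np.mean(aps))
```

A query counts as a Rank-k hit if any correct gallery tracklet appears in its top k; mAP averages the per-query average precision over the whole ranking.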

Important Dates

March 3, 2025 (9am AEST): Dataset and baseline code released.
March 3 - June 2, 2025: Participants develop and submit their solutions.
June 3, 2025 (9pm AEST): Final submission deadline.
June 6, 2025: Announcement of top-performing teams.
June 10, 2025: Technical reports due from top-performing teams.
June 23, 2025: Competition summary paper submission to IJCB 2025.
September 2025: Presentation of results at IJCB 2025 conference.

AG-VPReID Dataset

Training Dataset

| Identities | Tracklets | Frames |
|---|---|---|
| 689 | 5,317 | ~1.47M |

Testing Dataset

Testing Case 1: Aerial → Ground

| Set | Identities | Tracklets | Frames |
|---|---|---|---|
| Query | 645 | 3,023 | 424,532 |
| Gallery | 645 | 2,750 | 1,126,213 |

Testing Case 2: Ground → Aerial

| Set | Identities | Tracklets | Frames |
|---|---|---|---|
| Query | 645 | 2,750 | 1,126,213 |
| Gallery | 645 | 3,023 | 424,532 |
| Gallery Distractor | 1,693 | 2,417 | 687,988 |

Data Structure

Each video tracklet follows the pattern: {ID}/{Tracklet}/{Frames}
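The {ID}/{Tracklet}/{Frames} layout can be indexed with a short script. This is a minimal sketch, assuming frames are stored as image files (the `.jpg` extension and function name are illustrative, not specified by the dataset release):

```python
from pathlib import Path

def index_tracklets(root):
    """Map each (identity, tracklet) pair to its sorted frame paths.

    Assumes the {ID}/{Tracklet}/{Frames} directory layout described
    above, with frames stored as .jpg images (an assumption; adjust
    the glob pattern to match the actual frame files).
    """
    index = {}
    for id_dir in sorted(Path(root).iterdir()):
        if not id_dir.is_dir():
            continue
        for tracklet_dir in sorted(id_dir.iterdir()):
            if not tracklet_dir.is_dir():
                continue
            frames = sorted(tracklet_dir.glob("*.jpg"))
            index[(id_dir.name, tracklet_dir.name)] = frames
    return index
```

Sorting both directory listings and frame names keeps tracklet ordering deterministic, which matters when sampling fixed-length clips for training.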

***

Note: To download the dataset, you first need to register for and log in to our Kaggle competition website.
Please read our privacy and usage policy before downloading the dataset.

How To Participate

  1. Set up an account on the competition platform: Register and join the AG-VPReID 2025 competition on Kaggle.
  2. Review the competition rules and guidelines: Carefully review the competition's rules and guidelines to understand expectations and submission requirements.
  3. Access the data: Download the competition dataset, including the training and testing sets, from the competition platform.
  4. Develop your model: Design and train your person re-identification model using the provided training data. You may apply preprocessing and feature engineering techniques to improve performance.
  5. Generate predictions: Use your trained model to generate predictions on the test data, formatting the results according to the specified submission guidelines.
  6. Submit your results: Upload your prediction files to the competition platform for evaluation. Include a brief description of your approach and any additional details.
  7. Monitor the leaderboard: After submission, you can view your ranking on the competition leaderboard and iteratively improve your model based on feedback until the submission limit is reached.
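Step 5 (generating predictions) typically amounts to ranking gallery tracklets by distance for each query. The sketch below uses a hypothetical output layout (query name followed by a space-separated ranking); the actual submission format is specified on the competition platform and should be followed instead:

```python
import numpy as np

def ranked_submission(distmat, query_names, gallery_names, top_k=100):
    """Build ranked prediction lines from a query-gallery distance matrix.

    The line layout here ("<query> <g1> <g2> ...") is purely
    illustrative -- use the format given in the official submission
    guidelines. Smaller distances rank earlier.
    """
    lines = []
    for i, q in enumerate(query_names):
        order = np.argsort(distmat[i])[:top_k]   # most similar first
        ranked = " ".join(gallery_names[j] for j in order)
        lines.append(f"{q} {ranked}")
    return "\n".join(lines)
```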

***

Baseline code will be made available through our GitHub repository.

Rules

General

  1. Your name in the leaderboard should be in the format "Surname Name (organization)."
  2. Participants must work on the problems individually.
  3. There is a limit of 10 submissions total for the entire competition.
  4. Each participant's best submission will be considered their final result.
  5. Participants are encouraged to use the provided training data; however, they are allowed to use additional external data, provided that they disclose its usage upon submission. Algorithms trained on external data will be ranked separately.
  6. It is strictly prohibited for different accounts to use the same algorithm for submissions.
  7. A one-page method description must be submitted within one week of the competition's end for inclusion in the summary paper submitted to IJCB 2025.

Competition-Specific Rules

  1. The competition focuses on aerial-ground video-based person re-identification using the AG-VPReID dataset.
  2. Submissions will be evaluated using mean Average Precision (mAP) and Cumulative Matching Characteristics (CMC) at Rank-1, Rank-5, and Rank-10.
  3. Participants should generate submission files according to the specified format, which includes predicted identity labels for each test video sequence.
  4. Person identity labels in the gallery set are hidden from participants; whether an algorithm has correctly identified the target person is revealed only through evaluation feedback after submission.

Organizers

Drexel University, USA

Dr. Feng Liu

Michigan State University, USA

Prof. Xiaoming Liu
Prof. Arun Ross

Department of Defence, Australia

Dr. Dana Michalski

Contact us if you have questions!


Acknowledgment

We acknowledge the contributors to the AG-ReID 2023 challenge whose feedback shaped this second edition. This webpage uses the LiveDet-Iris 2023 template, courtesy of Adam Czajka and team. We thank QUT's Research Engineering Facility (REF) for their expertise and infrastructure, and all volunteers who participated in data collection.