This page explains how to prepare a submission to the JRDB Benchmark. It covers the submission policy and rules, the development kits and how to use them to develop and test your tracking and detection solutions on your own computer, the criteria we use to evaluate tracking and detection submissions, and the expected format of your submission file. For further questions, please contact us at jrdb@cs.stanford.edu.

Submission Policy

  • We strongly encourage all participants to use only the provided training data split to develop their algorithms (e.g. for the learning process and/or parameter tuning). The test data split should be used only to generate final results for a new submission to the challenge. Please do not use the challenge submission system as a way to tune your algorithm!
  • Important: We limit the number of submissions per account to THREE per month for each task. Only the latest submission per account is considered for the leaderboard. It is STRICTLY PROHIBITED to create multiple accounts using different email addresses! We will actively monitor submissions and delete accounts that violate these rules, e.g. accounts identified by invalid or duplicate supervisor and institution information.
  • For both the 2D and 3D tracking challenges, participants may either use their own detector or use the detections we provide on the website.
  • Submissions to the challenge must be accompanied by at least a short abstract (up to 5000 characters) explaining the technical details of the method used.
  • Metadata can be edited after submission by clicking "edit" where previous submissions are displayed. Note that metadata can be updated for up to 6 months, after which the submission becomes final. If a submission is still anonymous after 6 months, it will be deleted.
  • Currently, all tracking and detection submissions are evaluated on the stitched images rather than the individual camera images, but participants are free to use all available data.

Development Kits

    We have modified and prepared some tools to work with the dataset and to prepare your submission.
  • Detection Development Kit: adapted from the KITTI development kit to handle the format of our dataset.
  • Tracking Development Kit: based on the MOT-Challenge development kit; it handles the labels and format of our dataset.

Criteria for the Evaluation

    We adopted the well-established metrics and criteria from KITTI and MOT-Challenge. Details about the criteria can be found in the following document:
  • Criteria for the Evaluation and Information about Development Kit

  • Evaluation of Tracking: We will use MOTA to evaluate the performance of each tracking submission. We will also report the number of identity switches (IDS), false positives (FP), and misses (FN), as well as MOTP, on the leaderboard; a minimal reference computation of MOTA is sketched after this list. Additional metrics may be included later in the challenge.

  • Evaluation of Detection: We will use precision to evaluate the performance of each detection submission, but we will also report recall, as well as AOS for 2D detection. Additional metrics may be included later in the challenge.
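
    For reference, MOTA combines the tracking error counts above into a single score. The tracking development kit computes it for you; the following is only a minimal sketch of the standard CLEAR-MOT definition:

      def mota(false_positives, misses, id_switches, num_gt_boxes):
          # MOTA = 1 - (FP + FN + IDS) / total ground-truth boxes, accumulated
          # over all frames of a sequence. It can become negative when the
          # number of errors exceeds the number of ground-truth boxes.
          errors = false_positives + misses + id_switches
          return 1.0 - errors / float(num_gt_boxes)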

Preparing Tracking Submissions

    Your submission will consist of a single zip file. The folder structure and content of this file (e.g. result files) have to comply with the MOT format described in:
      Milan, Anton, et al.
      "Mot16: A benchmark for multi-object tracking."
      arXiv preprint arXiv:1603.00831 (2016).
      https://motchallenge.net/
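
    As a quick reference (the cited paper is authoritative), each line of a MOT-style result file describes one box with comma-separated values: frame, track ID, bounding-box left, top, width, height, confidence, and three world-coordinate fields that are set to -1 when unused. Below is a minimal writer sketch; it assumes your tracker output is a list of (frame, track_id, left, top, width, height, score) tuples, which is only a hypothetical in-memory layout, and you should check the tracking development kit for the exact fields expected by the JRDB evaluation:

      def write_mot_results(path, tracks):
          # tracks: iterable of (frame, track_id, left, top, width, height, score)
          # One line per box; the unused world-coordinate fields are set to -1.
          with open(path, "w") as f:
              for frame, track_id, left, top, width, height, score in tracks:
                  f.write("%d,%d,%.2f,%.2f,%.2f,%.2f,%.2f,-1,-1,-1\n"
                          % (frame, track_id, left, top, width, height, score))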

    Expected Directory Structure of 2D Tracking Submissions:
      <TEST_ROOT>/<SEQUENCE_1_IMAGE_NUM>.txt (e.g. cubberly-auditorium-2019-04-22_1_image_0.txt)
      <TEST_ROOT>/<SEQUENCE_2_IMAGE_NUM>.txt (e.g. discovery-walk-2019-02-28_0_image_2.txt)
      ...
    or
    Expected Directory Structure of 3D Tracking Submissions:
      <TEST_ROOT>/<SEQUENCE_1>.txt (e.g. cubberly-auditorium-2019-04-22_1.txt)
      <TEST_ROOT>/<SEQUENCE_2>.txt (e.g. discovery-walk-2019-02-28_0.txt)
      ...
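
    A minimal packaging sketch for the layouts above, assuming your per-sequence result files already sit in a local results/ directory (a hypothetical path) and should be placed at the root of the archive; please verify the exact archive layout against the tracking development kit:

      import os
      import zipfile

      def make_submission_zip(results_dir, zip_path):
          # Place every <SEQUENCE>.txt at the top level of the zip archive.
          with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
              for name in sorted(os.listdir(results_dir)):
                  if name.endswith(".txt"):
                      zf.write(os.path.join(results_dir, name), arcname=name)

      make_submission_zip("results", "tracking_submission.zip")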

    When preparing and evaluating your results on the training split on your own computer, the ground truth data should be structured in the following manner.
    Layout for ground truth data of 2D tracking (note that IMAGE_NUM is left out for image_stitched):
      <GT_ROOT>/<SEQUENCE_1>/gt_<IMAGE_NUM>/gt.txt (e.g. cubberly-auditorium-2019-04-22_1/gt/gt.txt)
      <GT_ROOT>/<SEQUENCE_2>/gt_<IMAGE_NUM>/gt.txt (e.g. discovery-walk-2019-02-28_0/gt_image_2/gt.txt)
      ...
    Layout for 3D ground truth data:
      <GT_ROOT>/<SEQUENCE_1>/gt/3d_gt.txt (e.g. cubberly-auditorium-2019-04-22_1/gt/3d_gt.txt)
      <GT_ROOT>/<SEQUENCE_2>/gt/3d_gt.txt (e.g. discovery-walk-2019-02-28_0/gt/3d_gt.txt)
      ...

    During the evaluation, corresponding sequences of ground truth and test will be matched according to the `<SEQUENCE_X>` string.
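
    A minimal sketch of that matching, shown here for the 3D layout above (the 2D result files additionally carry the _image_<N> suffix in their names):

      import glob
      import os

      def match_sequences(gt_root, test_root):
          # Pair each result file with the ground-truth folder of the same
          # sequence, using the shared <SEQUENCE_X> string.
          pairs = []
          for gt_dir in sorted(glob.glob(os.path.join(gt_root, "*"))):
              sequence = os.path.basename(gt_dir)
              result_file = os.path.join(test_root, sequence + ".txt")
              if os.path.exists(result_file):
                  pairs.append((gt_dir, result_file))
          return pairs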

Preparing Detection Submissions

    Your submission will consist of a single zip file. The folder structure and content of this file (e.g. result files) have to comply with the KITTI format described in:
      Geiger, Andreas, Lenz, Philip, and Urtasun, Raquel.
      "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite."
      2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012.
      http://www.cvlibs.net/datasets/kitti/index.php
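
    As a quick reference (the cited paper and the KITTI development kit are authoritative), each line of a KITTI-style result file describes one object with space-separated fields: type, truncation, occlusion, observation angle, 2D box (left, top, right, bottom), 3D dimensions, 3D location, rotation, and confidence. Below is a minimal writer sketch for a single 2D box; it assumes the class label expected by the JRDB evaluation is "Pedestrian" and that unused fields may carry the usual KITTI placeholder values, so please check the detection development kit for the exact conventions:

      def write_kitti_box(f, left, top, right, bottom, score):
          # One 2D pedestrian box in KITTI label format; the 3D fields and the
          # observation/rotation angles are filled with placeholder values.
          f.write("Pedestrian -1 -1 -10 %.2f %.2f %.2f %.2f "
                  "-1 -1 -1 -1000 -1000 -1000 -10 %.4f\n"
                  % (left, top, right, bottom, score))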

    Expected Directory Structure of 2D Detection Submissions:
      <TEST_ROOT>/<SEQUENCE_1>/<IMAGE_NUM>/<FRAME_NUM>.txt (e.g. cubberly-auditorium-2019-04-22_1/image_0/000000.txt)
      <TEST_ROOT>/<SEQUENCE_2>/<IMAGE_NUM>/<FRAME_NUM>.txt (e.g. discovery-walk-2019-02-28_0/image_2/000000.txt)
      ...
    or
    Expected Directory Structure of 3D Detection Submissions:
      <TEST_ROOT>/<SEQUENCE_1>/<FRAME_NUM>.txt (e.g. cubberly-auditorium-2019-04-22_1/000000.txt)
      <TEST_ROOT>/<SEQUENCE_2>/<FRAME_NUM>.txt (e.g. discovery-walk-2019-02-28_0/000000.txt)
      ...
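
    A minimal sketch for writing the per-frame files above, shown for the 3D layout (the 2D case adds the <IMAGE_NUM> subfolder). It assumes your detections are held in a nested dict mapping sequence name to frame index to a list of already-formatted result lines, which is only a hypothetical in-memory layout:

      import os

      def dump_detection_files(test_root, detections):
          # detections: {sequence_name: {frame_index: [result_lines]}}
          # Each result line is assumed to already end with a newline.
          # Writes one zero-padded file per frame, e.g. 000000.txt, 000001.txt, ...
          for sequence, frames in detections.items():
              seq_dir = os.path.join(test_root, sequence)
              os.makedirs(seq_dir, exist_ok=True)
              for frame_index, lines in sorted(frames.items()):
                  frame_path = os.path.join(seq_dir, "%06d.txt" % frame_index)
                  with open(frame_path, "w") as f:
                      f.writelines(lines)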

    When preparing and evaluating your results on the training split on your own computer, the ground truth data should be structured in the following manner.
    Layout for ground truth data of 2D detection:
      <GT_ROOT>/<SEQUENCE_1>/<FRAME_NUM>.txt (e.g. cubberly-auditorium-2019-04-22_1/000000.txt)
      <GT_ROOT>/<SEQUENCE_2>/<FRAME_NUM>.txt (e.g. discovery-walk-2019-02-28_0/000000.txt)
      ...
    Layout for ground truth data of 3D detection:
      <GT_ROOT>/<SEQUENCE_1>/<FRAME_NUM>.txt (e.g. cubberly-auditorium-2019-04-22_1/000000.txt)
      <GT_ROOT>/<SEQUENCE_2>/<FRAME_NUM>.txt (e.g. discovery-walk-2019-02-28_0/000000.txt)
      ...

    During the evaluation, corresponding sequences of ground truth and test will be matched according to the `<SEQUENCE_X>` string.