This is the second workshop in the JRDB workshop series, dedicated to the perceptual problems an autonomous robot must solve to operate in, interact with and navigate human environments. These perception tasks include any 2D or 3D visual scene understanding problem, as well as problems pertinent to human action, intention and social behaviour understanding, such as 2D-3D human detection, tracking and forecasting; 2D-3D human body skeleton pose estimation, tracking and forecasting; and human social grouping and activity recognition.
Recently, the community has paid increasing attention to the human activity understanding problem, due to the availability of several large-scale annotated datasets for this computer vision and robotics task. However, the existing datasets for this problem are often collected from platforms such as YouTube and are limited to 2D annotations of individual actions and activities. The main focus of our CVPR workshop is the novel problem of social human activity understanding, consisting of three sub-tasks: individual action detection, social group identification and social activity recognition. We also introduce JRDB-Act, a large-scale, ego-centric and multi-modal dataset.
The JRDB dataset contains 67 minutes of annotated sensory data acquired from the JackRabbot mobile manipulator, comprising 54 indoor and outdoor sequences captured in a university campus environment. The sensory data includes a stereo RGB 360° cylindrical video stream, 3D point clouds from two LiDAR sensors, audio and GPS positions. In addition to the existing 2D-3D person detection and tracking annotations, we will release a new set of annotations for this dataset: 1) individual actions, 2) human social group formation, and 3) the social activity of each social group. Using these unique annotations, we will launch two new benchmarks and challenges for this workshop. We also have, as invited speakers, world-renowned experts in the field of visual perception for understanding human action, intention and social behaviour. Finally, we aim to foster discussion among the attendees to find useful synergies and applications of the solutions to these (or similar) perceptual tasks.
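To make the relationship between the three new annotation types concrete, the following is a minimal, purely illustrative Python sketch. All class and field names here are hypothetical and do not reflect the official JRDB-Act file format; the sketch only mirrors the structure described above, where individual action labels attach to tracked persons and a single social activity label attaches to each social group.

```python
# Hypothetical sketch of the three new annotation types; field names are
# our own invention, not the official JRDB-Act schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PersonAnnotation:
    track_id: int                  # person identity across frames (from 2D-3D tracking)
    actions: List[str] = field(default_factory=list)  # individual action labels

@dataclass
class SocialGroup:
    member_ids: List[int]          # track_ids of the people forming one social group
    activity: str                  # one social activity label per group

@dataclass
class FrameAnnotation:
    frame_idx: int
    persons: List[PersonAnnotation]
    groups: List[SocialGroup]

# Example frame: two tracked people walking together as one social group.
frame = FrameAnnotation(
    frame_idx=0,
    persons=[PersonAnnotation(1, ["walking"]), PersonAnnotation(2, ["walking"])],
    groups=[SocialGroup(member_ids=[1, 2], activity="walking together")],
)
```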
The currently available annotations on the JackRabbot dataset and benchmark (JRDB) include:
In addition to the above, we have provided a new set of annotations, including:
We invite researchers to submit papers addressing topics related to autonomous (robot) navigation in human environments. Relevant topics include, but are not limited to:
Submissions may follow the CVPR format (4-8 double-column pages, excluding references) or be an extended abstract (1 double-column page, excluding references). Accepted papers will have the opportunity to be presented as posters during the workshop; however, only papers in CVPR format will appear in the proceedings. By submitting to this workshop, the authors agree to the review process and understand that we will do our best to match papers to the best possible reviewers. The review process is double-blind. Submission to a challenge is independent of paper submission, but we encourage authors to submit to one of the challenges.
Submissions can be made here. If you have any questions about submitting, please contact us here.