IROS 2024 - Room 15, ADNEC, Abu Dhabi, UAE - 14 October 2024, 08:30

Brain over Brawn (BoB)

Workshop on Label Efficient Learning Paradigms for Autonomy at Scale


IROS conference site: https://iros2024-abudhabi.org/ (requires IROS registration)


Overview

Workshop Details

Welcome to the IROS 2024 Workshop on Label Efficient Learning Paradigms for Autonomy at Scale!

Recent advances in autonomous mobile robotics have enabled the deployment of robots in a wide range of structured environments such as cities and warehouses, where an abundance of manually labeled data is readily available to train existing deep learning algorithms. However, manual data annotation is financially prohibitive at large scales, and it also hinders the deployment of such algorithms in complex unstructured environments such as mines and burning buildings, where labeled data is not readily available.

The goal of this workshop is to spotlight, encourage, and analyze different robotics paradigms that can be leveraged to train models with limited supervision. Specifically, this workshop will explore work in the fields of self-supervised learning, zero-/few-shot/in-context learning, and transfer learning, among others. Furthermore, this workshop intends to investigate the use of the rich feature representations generated by emerging vision foundation models such as DINO, CLIP, and SAM to reduce or remove manual data annotation in existing training protocols; a brief illustrative sketch of this idea follows the questions below. Through this workshop, we aim to provide a platform for researchers from various disciplines such as robotics, computer vision, and deep learning to exchange ideas and promote collaborations. We also hope to propel research in this exciting direction, enabling the large-scale deployment of autonomous robots in many facets of our day-to-day lives. This workshop will specifically aim to address the following core questions:

  • What are the real-world limitations of largely relying on labeled data?
  • What are the challenges of existing learning with limited supervision paradigms that prevent their widespread adoption in autonomous mobile robotics?
  • Which research directions in computer vision and deep learning are beneficial for robotics, and which directions need significant reformulation?
  • How can the robotics community better utilize various breakthroughs in machine learning and deep learning?
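
To make the foundation-model idea above concrete, here is a minimal sketch of zero-shot pseudo-labelling with frozen CLIP features. It is an illustration under assumptions rather than a workshop-endorsed recipe: the label set, image path, and confidence threshold are all hypothetical placeholders.

```python
# Zero-shot pseudo-labelling sketch using OpenAI's CLIP
# (pip install git+https://github.com/openai/CLIP.git).
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical label set and image path; replace with your own data.
class_names = ["road", "vegetation", "building", "vehicle", "pedestrian"]
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
image = preprocess(Image.open("frame_000042.png")).unsqueeze(0).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(prompts)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

# Keep only confident predictions as pseudo-labels for downstream training.
confidence, idx = probs[0].max(dim=-1)
if confidence > 0.5:  # the threshold is a tunable assumption
    print(f"pseudo-label: {class_names[idx]} ({confidence:.2f})")
```

Pseudo-labels produced this way can bootstrap a conventional training pipeline with no manual annotation in the loop.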




Paper Submission

Call for Papers

📚 Topics



The goal of this workshop is to create a forum that enables the free exchange of ideas between experts from various communities such as robotics, computer vision, and representation learning, among others. We therefore invite both early-career and experienced researchers to submit work focusing on, but not limited to, the following topics:

  • Self-Supervised, Weakly-Supervised and Unsupervised Learning
  • Zero- and K-Shot Learning
  • Leveraging Vision Foundation Models for Data-Efficient Learning
  • Transfer Learning
  • Knowledge Distillation (Cross-Modal, Cross-Domain, Teacher-Student, etc.)
  • Domain Adaptation
  • Open World Learning
  • ...

📨 Submission



We invite you to submit high-quality research as a short paper (4 pages maximum). The page count excludes references (i.e., 4 + n pages). You are encouraged to use the suggested IROS paper template and upload a PDF. In line with the IROS review guidelines, the review process will be single-blind, i.e., the authors' names do not need to be anonymized. We encourage submissions of work in progress as well as recent work that is currently under review or has already been accepted elsewhere. Accepted papers will be made publicly available through this website (non-archival) and will be presented as posters during IROS 2024 in Abu Dhabi, UAE, with a select few featured in the spotlight lightning session.

Paper submissions will be handled with OpenReview through the following link:

BoB @ OpenReview

🥇 Awards



The three best posters presented during the workshop will each be awarded a physical GPU, sponsored by NVIDIA.*

*Competition winners must pass eligibility checks. Regrettably, winners who do not pass these checks will not be able to receive an NVIDIA-sponsored prize.

Speakers

Invited Speakers

Luca Carlone

Associate Professor, MIT, USA

Fatma Güney

Assistant Professor, Koç University, Türkiye

Peyman Moghadam

Research Scientist, CSIRO Data61, Australia

Marija Popović

Assistant Professor, TU Delft, The Netherlands

Panelists

Invited Panelists

Ayoung Kim

Associate Professor, SNU, Korea

Michael Milford

Professor, QUT, Australia

Cyrill Stachniss

Full Professor, University of Bonn, Germany

Organizers

Workshop Organizers

Nicholas Autio Mitchell

Senior Deep Learning Scientist, NVIDIA, Germany

Andrei Bursuc

Research Scientist, Valeo, France

Daniele Cattaneo

Research Group Leader, University of Freiburg, Germany

Hazel Doughty

Assistant Professor, Leiden University, The Netherlands

Nikhil Gosala

PhD Student, University of Freiburg, Germany

Kürsat Petek

PhD Student, University of Freiburg, Germany

Katie Skinner

Assistant Professor, University of Michigan, USA

Andreea Tulbure

PhD Student, ETH Zürich, Switzerland

Abhinav Valada

Professor, University of Freiburg, Germany

Contact us at: bob-workshop-orga@googlegroups.com
Non-archival track:

Event            Date
Submission Open  08 Jul, 2024
Submission       20 Sep, 2024, 23:59 PST (extended from 31 Aug, 2024)
Notification     30 Sep, 2024 (extended from 15 Sep, 2024)
Camera Ready     10 Oct, 2024 (extended from 01 Oct, 2024)
Workshop         14 Oct, 2024, 08:30

Program

Workshop Program

All invited talks, oral presentations, and the panel discussion will take place in person at IROS in Abu Dhabi, UAE, with support for remote participation.

09:00 - 09:15  Opening Remarks

09:15 - 09:45  Keynote: Task-Driven Map Representations with Foundation Models

Modern tools for class-agnostic image segmentation (e.g., Segment Anything) and open-set semantic understanding (e.g., CLIP) provide unprecedented opportunities for robot perception and mapping. While traditional closed-set metric-semantic maps were restricted to tens or hundreds of semantic classes, we can now build maps with a plethora of objects and countless semantic variations. This leaves us with a fundamental question: what is the right granularity for the objects (and, more generally, for the semantic concepts) the robot has to include in its map representation? This talk argues that the answer is intrinsically task-dependent and introduces a framework to connect the robot task, specified in natural language, with the map representation, such that the robot can build and retain sufficient information to complete the tasks.
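
As a rough, assumption-laden sketch of the task-driven idea (not the speaker's actual framework), one could rank the open-set objects in a map by their CLIP text-embedding similarity to a natural-language task and retain only the most relevant ones. The task string and object labels below are hypothetical.

```python
# Rank open-set map objects by relevance to a natural-language task using
# CLIP text embeddings (pip install git+https://github.com/openai/CLIP.git).
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

task = "collect all the mugs from the kitchen and put them in the sink"
# Hypothetical object labels produced by an open-set mapping pipeline.
map_objects = ["mug", "sofa", "sink", "kitchen counter", "television", "plant"]

with torch.no_grad():
    task_feat = model.encode_text(clip.tokenize([task]).to(device))
    obj_feat = model.encode_text(clip.tokenize(map_objects).to(device))
    task_feat = task_feat / task_feat.norm(dim=-1, keepdim=True)
    obj_feat = obj_feat / obj_feat.norm(dim=-1, keepdim=True)
    relevance = (obj_feat @ task_feat.T).squeeze(1)

# Retain only the objects most relevant to the task in the map.
for name, score in sorted(zip(map_objects, relevance.tolist()),
                          key=lambda x: -x[1]):
    print(f"{score:.3f}  {name}")
```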

09:45 - 10:15  Keynote: A Comparison of Potential Representations in Self-Driving

10:15 - 11:00  Coffee Break and Poster Session

11:00 - 11:45  Panel Discussion: Research2Reality

11:45 - 12:15  Keynote: Learning Generalizable Feature Fields for Mobile Manipulation

12:15 - 12:45  Keynote: Robot Planning for Active Learning

Perceiving and understanding complex environments is a key requirement for full robot autonomy. However, large-scale robotic deployments are limited by the prohibitive cost and difficulty of manually annotating data to train deep learning models. This talk will discuss approaches that use robots to autonomously collect useful data for model training. We present a general planning framework that guides a robot to areas of informative training data by incorporating principles of active learning and semi-supervised learning. Our experiments show that planning for training data collection in this way maximises model performance while drastically reducing labelling effort, demonstrating the benefit of using robots as tools for training data collection.
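
The active-learning principle behind such planning can be sketched in a few lines: score each candidate waypoint by the entropy of the current model's predictions on the views expected there, and collect data where the model is least certain. This is a minimal illustration under assumptions, not the presented framework; the function names and tensor shapes are hypothetical.

```python
# Uncertainty-driven waypoint selection for training data collection.
import torch

def prediction_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean per-pixel entropy of softmax predictions; logits: (N, C, H, W)."""
    p = logits.softmax(dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean(dim=(1, 2))

def select_waypoint(model: torch.nn.Module,
                    candidate_views: torch.Tensor) -> int:
    """candidate_views: (N, 3, H, W) images the robot expects to observe at
    each of N candidate waypoints (e.g., rendered from its current map)."""
    model.eval()
    with torch.no_grad():
        scores = prediction_entropy(model(candidate_views))
    return int(scores.argmax())  # most uncertain view = most informative
```

Labels requested at the chosen waypoint, whether from a human or a semi-supervised signal, then feed back into training, closing the loop.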

12:45 - 13:00  Closing Remarks

Website template adapted from RoboNerF