Recent advances in autonomous mobile robotics have enabled the deployment of robots in a wide range of structured environments such as cities and warehouses, where an abundance of manually labeled data is readily available to train existing deep learning algorithms. However, manual data annotation is financially prohibitive at large scales, and reliance on it hinders the deployment of such algorithms in complex unstructured environments such as mines and burning buildings, where labeled data is not readily available.
The goal of this workshop is to bring into the spotlight, encourage, and analyze different robotics paradigms that can be leveraged to train models with limited supervision. Specifically, the workshop will explore work in the fields of self-supervised learning, zero-/few-shot/in-context learning, and transfer learning, among others. Furthermore, it will investigate the use of the rich feature representations produced by emerging vision foundation models such as DINO, CLIP, and SAM to reduce or remove manual data annotation in existing training protocols. Through this workshop, we aim to provide a platform for researchers from disciplines such as robotics, computer vision, and deep learning to exchange ideas and foster collaborations. We also hope to propel research in this exciting direction and thereby enable the large-scale deployment of autonomous robots in many facets of our day-to-day lives. The workshop will specifically aim to address the following core questions:
- What are the real-world limitations of relying largely on labeled data?
- What challenges in existing limited-supervision learning paradigms prevent their widespread adoption in autonomous mobile robotics?
- Which research directions in computer vision and deep learning are beneficial for robotics, and which directions need significant reformulation?
- How can the robotics community better utilize various breakthroughs in machine learning and deep learning?