Overview
This workshop is meant to be a platform of exchange between the three communities of computer vision, machine learning, and robotics. We want to encourage them to find feasible solutions that bridge the gap between stand-alone perception and robotics-related tasks such as motion or assembly planning, visual servoing, and grasping. A main topic is how sensing, manipulation, and planning can be combined to yield mutual benefits. We also seek scalable learning-based approaches that require little supervision, and examine their benefits and limitations. These can include learning in simulation, transfer and few-shot learning, automatic labeling, or reinforcement learning. Are end-to-end learning approaches really the right way to go, or are modular pipelines still preferable due to better introspection? Are current subtask metrics suitable indicators of execution success? What is necessary to address the needs of end-user applications in terms of scalability, robustness, runtime, cost, maintainability, and fail-safety?
Invited Speakers
- Maxim Likhachev, Robotics Institute Carnegie Mellon University
Offline Learning for Online Planning
In manufacturing and automation settings, robots often have to perform complex yet repetitive manipulation tasks. Furthermore, in many cases, for example a robot operating at a moving conveyor, robots have very limited time to decide what action to execute next and how to do it, independently of the complexity of the planning problem. In this talk, I will describe some of our research efforts towards the use of offline learning to ensure that online planning is fast and robust enough for such problems. Specifically, in the first part of the talk, I will present an offline pre-processing method that provides provably constant-time online planning for repetitive planning tasks in static environments. In the second part of the talk, I will describe our approach to learning from offline simulation-based planning for online decision-making under significant uncertainty in the model and environment. I will use mobile manipulation tasks to illustrate the described approaches.
- Renaud Detry, NASA JPL
Combining Semantic and Geometric Scene Understanding: From Robot Manipulation to Planetary Exploration
- Sergey Levine, UC Berkeley, EECS
Data-Driven Robotic Reinforcement Learning
The ability of machine learning systems to generalize to new situations is determined in large part by the availability of large and diverse training sets. In robotics, it is often thought that large datasets are difficult to obtain, and therefore we need alternative methods that can handle small datasets. In this talk, I will discuss how in fact robots should be better suited for large-data training regimes than supervised learning systems, since they do not require humans to manually provide labels for the data. I will discuss how effective robotic learning requires removing the barriers to data-driven improvement from every part of the learning pipeline, from task specification, to data collection, to off-policy reinforcement learning, and present initial results that study each of these problems.
- Leonel Rozo, Bosch Center for Artificial Intelligence
Exploiting Geometric and Temporal Structure in Object-centric Skills Learning
- Gilwoo Lee, School of Computer Science & Engineering at the University of Washington
Bayesian Reinforcement Learning
- Dieter Fox, University of Washington and NVIDIA AI Robotics Research Lab
Simulation for Training Manipulation Systems
Call for Papers
We solicit 2-4 page extended abstracts (following RSS style guidelines). Submissions may include late-breaking results, material currently under review, or previously archived work. We strongly encourage live demos or videos accompanying the submission. Topics of interest include:
- Reducing setup time through automated training procedures
- What do we gain / sacrifice with the current methods that require less supervision?
- Transfer learning, learning in simulation, automatic labeling, reinforcement learning
- How can the interaction between sensing, manipulation and planning yield mutual benefits?
- e.g., extracting semantic information from visual/tactile data for improved execution planning
- Scalable approaches for grasping novel objects and for generalizing functional grasps
- As end-to-end approaches gain traction, which benefits and limitations do they possess compared to modular pipelines?
- How can we achieve introspection in end-to-end methods?
- Are commonly used subtask metrics in modular approaches suitable indicators for execution success?
- Which possibilities for automatic recovery exist to create fail-safe systems?
- Addressing the needs of end-user applications in terms of robustness, runtime, cost, maintainability, etc.
Submitted papers will be reviewed by the organizers and invited reviewers. Accepted contributions will be presented as posters or in the Demo/Video Talk format; selected papers will be further featured as spotlight talks. All accepted contributions and posters will be posted on the workshop website upon author approval.
Submission is now open on EasyChair (SLIPP-2019RSS); please take note of the submission deadline below.
Schedule
Time | Topic
---|---
8:00 - 9:00 | On-site Registration
9:00 - 9:10 | Introductory Remarks
9:10 - 9:50 | Invited Talk: Maxim Likhachev, Offline Learning for Online Planning
9:50 - 10:30 | Invited Talk: Renaud Detry, Combining Semantic and Geometric Scene Understanding: From Robot Manipulation to Planetary Exploration
10:30 - 10:45 | Poster Spotlights
10:45 - 11:30 | Coffee Break + Posters
11:40 - 12:20 | Invited Talk: Sergey Levine, Data-Driven Robotic Reinforcement Learning
12:20 - 13:00 | Invited Talk: Leonel Rozo, Exploiting Geometric and Temporal Structure in Object-centric Skills Learning
13:00 - 13:45 | Lunch Break
13:50 - 14:30 | Invited Talk: Gilwoo Lee, Bayesian Reinforcement Learning
14:30 - 15:15 | Coffee Break + Posters
15:20 - 16:00 | Invited Talk: Dieter Fox, Simulation for Training Manipulation Systems
16:00 - 16:20 | Demo/Video Talks
16:20 - 16:40 | Discussion with the audience and invited experts
Important Dates
Date | Event
---|---
June 7, 2019 (23:59 Pacific Time) | Paper Submission Deadline
June 14, 2019 (23:59 Pacific Time) | Paper Acceptance Notification
June 22, 2019 (Saturday) | Workshop WS1-2, Scalable Learning for Integrated Perception and Planning, Faculty of Engineering, Building 82, Room 00 006
Contact
Should you have any questions, please do not hesitate to contact the organizing committee at scalableroboticlearning@dlr.de:
- Maximilian Durner
- Martin Sundermeyer
- Zoltan Marton
- En Yen Puang
- Rudolph Triebel