Toward End-to-end Reliable Robot Learning for Autonomy and Interaction

March 15, 2024

9:00 am - 9:00 am

  • Hudson Hall 125

Robots must behave safely and reliably if we are to confidently deploy them in the real world around humans. To complete tasks, robots must manage a complex, interconnected autonomy stack of perception, planning, and control software. While machine learning has unlocked the potential for full-stack end-to-end control in the real world, these methods can be catastrophically unreliable. In contrast, model-based safety-critical control provides rigorous guarantees, but struggles to scale to real systems, where common assumptions, e.g., perfect task specification and perception, break down.

However, we need not choose between real-world utility and safety. By taking an end-to-end approach to safety-critical control that builds and leverages knowledge of where learned components can be trusted, we can build practical yet rigorous algorithms that can make real robots more reliable. I will first discuss how to make task specification easier and safer by learning hard constraints from human task demonstrations, and how we can plan safely with these learned specifications despite uncertainty. Then, given a task specification, I will discuss how we can reliably leverage learned dynamics and perception for planning and control by estimating where these learned models are accurate, enabling probabilistic guarantees for end-to-end vision-based control. Finally, I will provide perspectives on open challenges and future opportunities, including robust perception-based hybrid control algorithms for reliable data-driven robotic manipulation and human-robot collaboration.
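The idea of "estimating where learned models are accurate" to obtain probabilistic guarantees can be illustrated with a minimal, hypothetical sketch in the style of conformal calibration: measure a learned dynamics model's prediction residuals on held-out data and take a high quantile as a calibrated error bound. The specific model, dynamics, and radius-based trust rule below are illustrative assumptions, not the speaker's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true one-step dynamics x' = 0.9*x + u,
# while the "learned" model is slightly biased.
def true_dynamics(x, u):
    return 0.9 * x + u

def learned_model(x, u):
    return 0.88 * x + u

# Held-out calibration set of (state, input) pairs.
xs = rng.uniform(-1.0, 1.0, size=500)
us = rng.uniform(-1.0, 1.0, size=500)
residuals = np.abs(true_dynamics(xs, us) - learned_model(xs, us))

# Conformal-style bound: under exchangeability, a fresh residual falls
# below this adjusted (1 - alpha) empirical quantile with probability
# at least 1 - alpha.
alpha = 0.1
n = len(residuals)
q = np.quantile(residuals, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

# A planner could then treat the learned model as trustworthy only
# inside a tube of radius q around its predictions.
print(f"Calibrated one-step error bound (alpha={alpha}): {q:.4f}")
```

In this toy example the residual is 0.02·|x|, so the calibrated bound stays at or below 0.02; a planner that inflates its reachable sets by that radius retains a probabilistic correctness guarantee wherever the model is used.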