Hybrid systems combine discrete and continuous dynamics, which makes them attractive as models for systems that combine computer control with physical motion. However, verification of hybrid systems is undecidable in general and challenging for many systems of practical interest. Even when verification results are obtainable, verification only provides guarantees when reality matches the verified model. Conversely, reinforcement learning-based controllers are lauded for their flexibility in unmodeled environments, but they typically do not provide guarantees of safe operation.
This talk describes a set of tooling and theory that allows system designers to obtain safety proofs for hybrid systems that incorporate reinforcement learning algorithms. Our main insight enabling efficient verification of hybrid systems is a proof development language, called Bellerophon, that provides a way to convey insights by programming hybrid systems proofs. Our approach toward safe reinforcement learning for hybrid systems leverages these verification results to provide safety guarantees for reinforcement learning controllers.
Presented in Partial Fulfillment of the CSD Speaking Skills Requirement