Run-Time Optimization for Learned Controllers Through Quantitative Games

Guy Avni, Roderick Bloem, Krishnendu Chatterjee, Thomas A. Henzinger, Bettina Könighofer, Stefan Pranger

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

A controller is a device that interacts with a plant. At each time point, it reads the plant's state and issues commands with the goal of making the plant operate optimally. Constructing optimal controllers is a fundamental and challenging problem. Machine learning techniques have recently been applied successfully to train controllers, yet they have limitations. Learned controllers are monolithic and hard to reason about. In particular, it is difficult to add features without retraining, to guarantee any level of performance, and to achieve acceptable performance in untrained scenarios. These limitations can be addressed by deploying quantitative run-time shields that serve as a proxy for the controller. At each time point, the shield reads the command issued by the controller and may choose to alter it before passing it on to the plant. We show how optimal shields, which interfere as little as possible while guaranteeing a desired level of controller performance, can be generated systematically and automatically using reactive synthesis. First, we abstract the plant by building a stochastic model. Second, we treat the learned controller as a black box. Third, we measure controller performance and shield interference by two quantitative run-time measures that are formally defined using weighted automata. The problem of constructing a shield that guarantees maximal performance with minimal interference then reduces to finding an optimal strategy in a stochastic 2-player game "controller versus shield", played on the abstract state space of the plant with a quantitative objective obtained by combining the performance and interference measures. We illustrate the effectiveness of our approach by automatically constructing lightweight shields for learned traffic-light controllers in various road networks. The shields we generate avoid liveness bugs, improve controller performance in untrained and changing traffic situations, and add features to learned controllers, such as giving priority to emergency vehicles.
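The abstract describes the shield as a proxy sitting between the learned controller and the plant: at every time point it reads the controller's proposed command and may substitute a different one, following a strategy computed offline from the stochastic 2-player game. The Python sketch below illustrates only that run-time loop under stated assumptions; the names RuntimeShield, strategy, and abstraction are hypothetical placeholders and not taken from the paper or its tool.

# A minimal sketch, assuming a shield strategy has already been computed
# offline (e.g., by solving the game described in the abstract). All names
# here are illustrative placeholders, not the authors' actual API.

from typing import Callable, Dict, Hashable, Tuple

AbstractState = Hashable
Command = Hashable


class RuntimeShield:
    """Proxy between a learned controller and the plant."""

    def __init__(self,
                 strategy: Dict[Tuple[AbstractState, Command], Command],
                 abstraction: Callable[[object], AbstractState]) -> None:
        # strategy: maps (abstract plant state, proposed command) to the
        # command the shield actually forwards to the plant.
        # abstraction: maps a concrete plant state to the abstract state
        # space on which the game was solved.
        self.strategy = strategy
        self.abstraction = abstraction
        self.interferences = 0  # run-time count of how often we override

    def step(self, plant_state: object, proposed: Command) -> Command:
        """Read the controller's command and possibly alter it."""
        abstract = self.abstraction(plant_state)
        chosen = self.strategy.get((abstract, proposed), proposed)
        if chosen != proposed:
            self.interferences += 1
        return chosen


# Hypothetical usage at each time point:
#   command = shield.step(plant.state(), controller.act(plant.state()))
#   plant.apply(command)

Representing the strategy as a lookup table reflects the "lightweight shield" idea in the abstract: all game solving happens offline, so the run-time overhead per step is a single table lookup, and the interference counter mirrors the quantitative interference measure.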

Original language: English
Title of host publication: Computer Aided Verification - 31st International Conference, CAV 2019, Proceedings
Editors: Isil Dillig, Serdar Tasiran
Publisher: Springer Verlag
Pages: 630-649
Number of pages: 20
ISBN (Print): 9783030255398
DOIs
State: Published - 2019
Externally published: Yes
Event: 31st International Conference on Computer Aided Verification, CAV 2019 - New York City, United States
Duration: 15 Jul 2019 - 18 Jul 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11561 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 31st International Conference on Computer Aided Verification, CAV 2019
Country/Territory: United States
City: New York City
Period: 15/07/19 - 18/07/19

Bibliographical note

Publisher Copyright:
© The Author(s). 2019.

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
