Shielded AI for Hybrid Systems
Translated title
AI med Skjold for Hybride Systemer
Author
Term
4th term
Publication year
2022
Submitted on
2022-06-10
Pages
50
Abstract
When safe behaviour of a reinforcement learning agent or other complex model cannot be guaranteed through verification methods, a shield can be put in place to enforce a given safety property. Safe actions suggested by the model are taken without modification, while potentially unsafe actions are corrected according to a synthesized safety strategy. A technique is developed for distinguishing between safe and unsafe actions in hybrid systems with respect to a given safety property. The technique was applied to two problems, one with a finite and one with an infinite time horizon, and the resulting safety strategy was successfully imported into UPPAAL Stratego to make use of the tool's capabilities for machine learning and statistical model checking while running experiments on the effects of shielding. When learning takes place with the developed shield active, learning outcomes are as good as, or better than, those of their unshielded counterparts.
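To illustrate the shielding mechanism described in the abstract, the following is a minimal Python sketch of a shield that passes safe actions through unchanged and corrects unsafe ones. The names (Shield, is_safe, safe_actions) and the choice of correction are assumptions for illustration, not the thesis implementation or the UPPAAL Stratego interface.

```python
# Minimal sketch of a shield wrapping an agent's action choice (assumed names;
# not the thesis implementation). Safe suggestions pass through unmodified,
# potentially unsafe ones are replaced by an action the safety strategy permits.

from typing import Callable, Sequence, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")


class Shield:
    def __init__(
        self,
        is_safe: Callable[[State, Action], bool],           # safety predicate derived from the synthesized strategy
        safe_actions: Callable[[State], Sequence[Action]],  # actions the safety strategy allows in a given state
    ) -> None:
        self.is_safe = is_safe
        self.safe_actions = safe_actions

    def filter(self, state: State, proposed: Action) -> Action:
        """Return the agent's proposed action if it is safe, otherwise a corrected safe action."""
        if self.is_safe(state, proposed):
            return proposed                      # safe suggestion is taken without modification
        allowed = self.safe_actions(state)
        if not allowed:
            raise RuntimeError("safety strategy offers no safe action in this state")
        return allowed[0]                        # correct to an action permitted by the safety strategy
```

In use, the learning agent's action would be routed through `Shield.filter(state, action)` at every step, both during training and at deployment, so that unsafe suggestions never reach the environment.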
