Shielded AI for Hybrid Systems

Student thesis: Master thesis (including HD thesis)

  • Asger Horn Brorholt
When safe behaviour of a reinforcement learning agent or other complex model cannot be guaranteed through verification methods, a shield can be put in place to enforce a given safety property.
Safe actions suggested by the model are taken without modification, while potentially unsafe actions are corrected according to a synthesized safety strategy.
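
As a minimal sketch of this pass-through-or-correct behaviour (not the thesis's actual implementation), the shield can be viewed as a wrapper around the agent's proposed action; the `allowed` and `safety_strategy` callables below are hypothetical names standing in for the synthesized safety artifacts:

```python
from typing import Callable, Set, Tuple

State = Tuple[float, ...]   # hypothetical: a point in the hybrid state space
Action = str                # hypothetical: a discrete control choice

def shielded_action(state: State,
                    proposed: Action,
                    allowed: Callable[[State], Set[Action]],
                    safety_strategy: Callable[[State], Action]) -> Action:
    """Pass a safe proposed action through unchanged; otherwise override
    it with the action prescribed by the synthesized safety strategy."""
    if proposed in allowed(state):
        return proposed               # safe: taken without modification
    return safety_strategy(state)     # potentially unsafe: corrected
```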

A technique is developed for hybrid systems to distinguish safe from unsafe actions with respect to a given safety property.
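
The abstract does not describe the technique itself; purely as a hedged illustration, one sampling-based approach labels a (state region, action) pair unsafe if any simulated successor state violates the safety property. Every name below (`Region`, `sample_region`, `simulate`, `violates`) is an assumption for the sketch, not the thesis's API:

```python
import random
from typing import Callable, Tuple

State = Tuple[float, ...]
Region = Tuple[Tuple[float, float], ...]  # per-dimension bounds of one grid cell

def sample_region(region: Region) -> State:
    """Draw a uniformly random concrete state from a grid cell."""
    return tuple(random.uniform(lo, hi) for lo, hi in region)

def action_safe_in_region(region: Region,
                          action: str,
                          simulate: Callable[[State, str], State],
                          violates: Callable[[State], bool],
                          samples: int = 100) -> bool:
    """Label (region, action) unsafe if any sampled successor state
    violates the safety property; otherwise treat the pair as safe."""
    for _ in range(samples):
        successor = simulate(sample_region(region), action)
        if violates(successor):
            return False
    return True
```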

The technique was applied to two problems, one with a finite and one with an infinite time horizon. The resulting safe strategy was successfully imported into UPPAAL Stratego, making use of the tool's machine learning and statistical model checking capabilities in experiments on the effects of shielding.

When learning occurs with the developed shield in place, learning outcomes are as good as, or better than, their unshielded counterparts.
Language: English
Publication date: 2022
Number of pages: 50
