A runtime monitoring framework to enforce invariants on reinforcement learning agents exploring complex environments

Paper in proceedings
Authors Piergiuseppe Mallozzi
Ezequiel Castellano
Patrizio Pelliccione
Gerardo Schneider
Kenji Tei
Published in RoSE 2019, IEEE/ACM 2nd International Workshop on Robotics Software Engineering, pp. 5-12
ISBN 978-1-7281-2249-6
Publisher IEEE
Publication year 2019
Published at Department of Computer Science and Engineering (GU)
Language English
Links https://ieeexplore.ieee.org/documen...
Keywords LTL invariants, Reinforcement learning, Reward shaping, Runtime monitoring
Subject categories Computer and Information Science, Software Engineering, Computer Engineering

Abstract

© 2019 IEEE. Without prior knowledge of the environment, a software agent can learn to achieve a goal using machine learning. Model-free Reinforcement Learning (RL) can be used to make the agent explore the environment and learn to achieve its goal by trial and error. Discovering effective policies to achieve the goal in a complex environment is a major challenge for RL. Furthermore, in safety-critical applications such as robotics, an unsafe action may have catastrophic consequences for the agent or the environment. In this paper, we present an approach that uses runtime monitoring to prevent the reinforcement learning agent from performing 'wrong' actions and to exploit prior knowledge to explore the environment efficiently. Each monitor is defined by a property that we want to enforce on the agent and a context. The monitors are orchestrated by a meta-monitor that activates and deactivates them dynamically according to the context in which the agent is learning. We have evaluated our approach by training the agent in randomly generated learning environments. Our results show that our approach blocks the agent from performing dangerous and safety-critical actions in all the generated environments. Moreover, our approach helps the agent achieve its goal faster by providing feedback and shaping its reward during learning.
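The abstract describes three interacting pieces: monitors defined by a property and a context, a meta-monitor that activates them dynamically, and reward shaping as feedback when an action is blocked. The sketch below illustrates how such a layer could sit between an agent and its environment. It is not the authors' implementation: the names (Monitor, MetaMonitor), the state fields, and the -1.0 shaping penalty are illustrative assumptions, and the LTL invariants are reduced to plain Python predicates for brevity.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Monitor:
    """A monitor pairs a context guard with an invariant over (state, action)."""
    name: str
    context: Callable[[Dict], bool]         # when the monitor is relevant
    invariant: Callable[[Dict, int], bool]  # must hold for the action to be allowed
    penalty: float = -1.0                   # shaping signal on a blocked violation


class MetaMonitor:
    """Activates/deactivates monitors by context and filters proposed actions."""

    def __init__(self, monitors: List[Monitor]):
        self.monitors = monitors

    def active(self, state: Dict) -> List[Monitor]:
        # Only monitors whose context matches the current state are enforced.
        return [m for m in self.monitors if m.context(state)]

    def check(self, state: Dict, action: int) -> Tuple[bool, float]:
        """Return (allowed, shaping_delta) for an action the agent proposes."""
        allowed, shaping = True, 0.0
        for m in self.active(state):
            if not m.invariant(state, action):
                allowed = False        # block the unsafe action
                shaping += m.penalty   # feedback used to shape the reward
        return allowed, shaping


# Example: forbid 'move forward' (action 0) when the agent faces a hazard.
near_fire = Monitor(
    name="never-enter-fire",
    context=lambda s: s.get("zone") == "hazard",
    invariant=lambda s, a: not (a == 0 and s.get("facing_fire", False)),
)

meta = MetaMonitor([near_fire])
state = {"zone": "hazard", "facing_fire": True}
allowed, shaping = meta.check(state, action=0)
print(allowed, shaping)  # False -1.0 -> the unsafe action is blocked and penalized
```

In a training loop, the agent would propose an action, the meta-monitor would veto it if any active invariant fails, and the shaping delta would be added to the environment reward; this is how a blocked agent can still receive informative feedback and, as the abstract reports, learn to reach its goal faster.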
