
A model for system uncertainty in reinforcement learning

Murray R; Palladino M
2018-01-01

Abstract

This work provides a rigorous framework for studying continuous-time control problems in uncertain environments. The framework models uncertainty in state dynamics as a probability measure on the space of functions. Such a probability measure is permitted to change over time as agents learn about their environment. This model can be seen as a variant of either Bayesian reinforcement learning (RL) or adaptive optimal control. We study conditions for locally optimal trajectories within this model, in particular deriving an appropriate dynamic programming principle and Hamilton–Jacobi equations. Some discussion of variants of the model is also provided, including one potential framework for studying the tradeoff between exploration and exploitation in RL.
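As a schematic illustration of the setup described in the abstract (not taken from the paper itself; the running cost L, terminal cost g, horizon T, and belief measure μ over candidate vector fields f are all notation assumed here for exposition), one might write the value function of such a problem as

% Schematic sketch only: L, g, T, and the belief measure \mu are illustrative
% notation introduced here, not the paper's own.
\[
  V(t,x,\mu) \;=\; \inf_{u(\cdot)} \, \mathbb{E}_{f \sim \mu}\!\left[ \int_t^T L(x(s),u(s))\,ds + g(x(T)) \right],
  \qquad \dot{x}(s) = f(x(s),u(s)), \quad x(t) = x,
\]

where the probability measure \mu encodes the agent's current belief about the unknown dynamics f and is updated as the agent learns. A dynamic programming argument would then formally lead to a Hamilton–Jacobi–Bellman equation of the type

\[
  -\partial_t V(t,x,\mu) \;+\; \sup_{u}\Big\{ -\mathbb{E}_{f \sim \mu}\big[ \nabla_x V(t,x,\mu) \cdot f(x,u) \big] \;-\; L(x,u) \Big\} \;=\; 0,
\]

with the precise form depending on how the time evolution of \mu is modeled.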
Dynamic programming, Learning systems, Machine learning, Adaptive control
Files in this item:

File: 2018_SystControlLett_122_Murray.pdf (not available for open access; a copy may be requested)
Type: Publisher's version (PDF)
License: Not public
Size: 367.3 kB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12571/7606
Citations
  • Scopus: 13