Model-Based Reinforcement Learning via Stochastic Hybrid Models. (arXiv:2111.06211v3 [eess.SY] UPDATED)

Optimal control of general nonlinear systems is a central challenge in
automation. Enabled by powerful function approximators, data-driven approaches
to control have recently succeeded in tackling challenging applications.
However, such methods often obscure the structure of dynamics and control
behind black-box over-parameterized representations, thus limiting our ability
to understand closed-loop behavior. This paper adopts a hybrid-system view of
nonlinear modeling and control that lends an explicit hierarchical structure to
the problem and breaks down complex dynamics into simpler localized units. We
consider a sequence modeling paradigm that captures the temporal structure of
the data and derive an expectation-maximization (EM) algorithm that
automatically decomposes nonlinear dynamics into stochastic piecewise affine
models with nonlinear transition boundaries. Furthermore, we show that these
time-series models naturally admit a closed-loop extension that we use to
extract local polynomial feedback controllers from nonlinear experts via
behavioral cloning. Finally, we introduce a novel hybrid relative entropy
policy search (Hb-REPS) technique that incorporates the hierarchical nature of
hybrid models and optimizes a set of time-invariant piecewise feedback
controllers derived from a piecewise polynomial approximation of a global
state-value function.
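
To make the decomposition concrete, here is a minimal sketch, not the paper's implementation, of an EM loop for a stochastic piecewise affine model: K affine experts x' = A_k x + b_k + noise with soft mode responsibilities. For brevity the sketch treats mode assignments as independent per step and derives the gating from mixture weights; the paper's sequence model would instead smooth over the discrete mode chain (e.g., with forward-backward recursions) and learn a nonlinear classifier for the transition boundaries. All function and variable names below are illustrative assumptions.

```python
# Sketch only: EM for a per-step mixture of affine dynamics models.
# The paper's method additionally models temporal structure and
# nonlinear transition boundaries; those parts are simplified here.
import numpy as np

def fit_piecewise_affine(X, Xn, K=3, iters=50, seed=0):
    """X: (N, d) states, Xn: (N, d) next states."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    Phi = np.hstack([X, np.ones((N, 1))])       # affine features [x, 1]
    W = np.abs(rng.standard_normal((N, K)))     # soft mode responsibilities
    W /= W.sum(axis=1, keepdims=True)
    sigma2 = np.ones(K)                         # isotropic noise per mode
    for _ in range(iters):
        # M-step: responsibility-weighted least squares per mode.
        Theta, logp = [], np.zeros((N, K))
        for k in range(K):
            w = W[:, k] + 1e-8
            Pw = Phi * w[:, None]
            Tk = np.linalg.solve(Phi.T @ Pw + 1e-6 * np.eye(d + 1),
                                 Pw.T @ Xn)     # rows stack [A_k; b_k]
            resid = Xn - Phi @ Tk
            sigma2[k] = (w[:, None] * resid**2).sum() / (w.sum() * d) + 1e-8
            Theta.append(Tk)
            logp[:, k] = (-0.5 * (resid**2).sum(axis=1) / sigma2[k]
                          - 0.5 * d * np.log(2 * np.pi * sigma2[k]))
        # Gating: plain mixture weights stand in for the learned
        # nonlinear transition boundaries of the full model.
        log_pi = np.log(W.mean(axis=0) + 1e-8)
        # E-step: posterior responsibilities via log-sum-exp.
        L = logp + log_pi
        L -= L.max(axis=1, keepdims=True)
        W = np.exp(L)
        W /= W.sum(axis=1, keepdims=True)
    return Theta, W

# Toy usage: a 1-D system with two affine regimes split at x = 0.
X = np.random.default_rng(1).uniform(-2, 2, size=(500, 1))
Xn = np.where(X < 0, 0.9 * X + 0.5, -0.8 * X - 0.2)
Theta, W = fit_piecewise_affine(X, Xn, K=2)
```

Under this simplification, the responsibilities W recover the two regimes and each Theta[k] approximates one local affine law; replacing the mixture-weight gating with a learned nonlinear classifier and the per-step E-step with smoothing over the mode sequence would bring the sketch closer to the hybrid model described above.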
