Distributed Safe Learning


Learning algorithms enable complex systems to be identified and controlled by exploring how the system reacts in different situations and exploiting this information to achieve the goal of the learning process. During exploration, it is important to constrain the possible actions so that the learning system does not endanger itself, e.g. to limit the velocity input of a quadcopter flying close to a tree in order to prevent a crash.
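A minimal sketch of such a local action constraint, using hypothetical numbers (the linear speed limit and the function name are illustrative, not the project's actual scheme): a proposed velocity command is scaled down when the quadcopter is close to an obstacle, so that exploration can continue without risking a crash.

```python
import numpy as np

def clamp_velocity(v_cmd, dist_to_obstacle, v_max=2.0, gain=0.5):
    """Limit a proposed velocity command near an obstacle.

    Illustrative safety rule: the allowed speed shrinks linearly
    with the remaining distance to the obstacle.
    """
    v_safe = min(v_max, gain * dist_to_obstacle)
    speed = np.linalg.norm(v_cmd)
    if speed <= v_safe:
        return v_cmd                      # proposed action is already safe
    return v_cmd * (v_safe / speed)       # scale down, keep the direction
```

Far from the obstacle the learning algorithm's command passes through unchanged; only near the obstacle is it restricted, which keeps the interference with learning small.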

Distributed systems, which consist of several cooperating but locally controlled subsystems, must not only satisfy such local constraints, but also coupled constraints that act on the states of neighbouring subsystems, in order to stay safe. In a fabrication plant, for example, the outlet flow from a tank may be optimal with respect to the processes inside the tank, but if a small downstream basin is being filled from several tanks, blindly increasing the outlet flow may cause an overflow.
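The tank example can be written as a simple coupled constraint check (the capacity value and function are hypothetical): the constraint involves not only the local outlet flow but also the flows of the neighbouring tanks feeding the same basin.

```python
def coupled_flow_ok(local_flow, neighbour_flows, basin_capacity):
    """Coupled constraint: the total inflow to the shared basin
    must not exceed its capacity (illustrative numbers)."""
    return local_flow + sum(neighbour_flows) <= basin_capacity

# A flow that is safe in isolation can still violate the coupled constraint:
coupled_flow_ok(4.0, [], 5.0)       # True: fine for a single tank
coupled_flow_ok(4.0, [3.0], 5.0)    # False: combined inflow overflows
```

This illustrates why each subsystem needs information from its neighbours: checking the constraint locally, without their flows, is not enough.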

Since many learning algorithms cannot guarantee the safety of the learning system, we develop a framework that ensures the safety of the applied actions while restricting the learning as little as possible and requiring communication only among neighbouring subsystems.
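One common way to realise such a scheme is a safety filter that clips the learning algorithm's proposed action into a locally admissible set computed from the values the neighbours have communicated. The sketch below applies this idea to the tank example; it is an assumption-laden illustration of the filtering principle, not the method of the cited paper.

```python
def safe_action(proposed, neighbour_commitments, basin_capacity, local_min=0.0):
    """Safety filter (sketch): clip a proposed outlet flow so the coupled
    basin constraint holds, given the flows the neighbouring tanks have
    communicated. All names and numbers are illustrative."""
    # Capacity left over after the neighbours' committed flows.
    local_max = basin_capacity - sum(neighbour_commitments)
    # Project the proposed action onto the admissible interval.
    return min(max(proposed, local_min), max(local_min, local_max))
```

The filter only needs the commitments of the neighbouring subsystems, and it leaves the proposed action untouched whenever it is already safe, restricting the learning as little as possible.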


R. B. Larsen, A. Carron and M. N. Zeilinger. Safe Learning for Distributed Systems with Bounded Uncertainties. IFAC World Congress, 2017.

Page URL: http://www.idsc.ethz.ch/research-zeilinger/research-projects/distributed-safe-learning.html
© 2017 Eidgenössische Technische Hochschule Zürich