When safety is critical, control systems are traditionally designed to act in isolated, clearly specified environments, or to be conservative against the unknown. The goal of our research is to make high-performance control available for safety-critical systems that act in varying, uncertain environments, and that may be large-scale, composed of numerous interconnected subsystems, and, most importantly, involve human interaction.
A new opportunity to address these challenges is offered by modern computing, sensing, and communication technologies, which provide increasing computational power and data at previously unseen detail and scale. We develop intelligent control systems that can exploit these data while offering system-theoretic guarantees. Our research integrates control, learning, and optimization to provide new theoretical frameworks and highly efficient computational tools in three core areas.
Safe Learning-Based Control: A key limitation of many learning methods is that they provide no safety guarantees. This research studies the intersection of learning and control, developing new methods that enable online learning while providing guarantees on the closed-loop behavior of the system.
Distributed Plug-and-Play Control: Controlling large-scale networks of dynamical systems requires a distributed structure in which each controller has only limited information about the global system to be controlled. We develop methods for networks with varying topologies, as well as limited local computation and information exchange.
Human-in-the-Loop Control Systems: Many modern control systems operate in direct interaction with humans, and neglecting this interplay is not only a performance bottleneck but also a safety risk. Our goal is to integrate humans as active participants into the control design, whether as operators or as part of the dynamical system. We focus in particular on developing predictive safety systems and personalized energy systems.