
Cocktail: Learn a Better Neural Network Controller from Multiple Experts via Adaptive Mixing and Robust Distillation (2103.05046v1)

Published 8 Mar 2021 in eess.SY and cs.SY

Abstract: Neural networks are being increasingly applied to control and decision-making for learning-enabled cyber-physical systems (LE-CPSs). They have shown promising performance without requiring the development of complex physical models; however, their adoption is significantly hindered by concerns about their safety, robustness, and efficiency. In this work, we propose COCKTAIL, a novel design framework that automatically learns a neural network-based controller from multiple existing control methods (experts) that could be either model-based or neural network-based. In particular, COCKTAIL first performs reinforcement learning to learn an optimal system-level adaptive mixing strategy that incorporates the underlying experts with dynamically-assigned weights, and then conducts a teacher-student distillation with probabilistic adversarial training and regularization to synthesize a student neural network controller with improved control robustness (measured by a safe control rate metric with respect to adversarial attacks or measurement noises), control energy efficiency, and verifiability (measured by the computation time for verification). Experiments on three non-linear systems demonstrate significant advantages of our approach on these properties over various baseline methods.
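The two stages described in the abstract can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's implementation: `mix_experts` combines expert control actions with softmax-normalized weights (standing in for the RL-learned adaptive mixing), and `distill_loss` scores a student controller against the mixed teacher action under sampled observation noise (standing in for the probabilistic adversarial training). All function names, shapes, and the noise model are assumptions for illustration.

```python
import numpy as np

def mix_experts(weights, expert_actions):
    """Blend expert control actions with softmax-normalized adaptive weights.

    weights: array of shape (n_experts,) -- e.g. produced by an RL policy.
    expert_actions: list of arrays, each of shape (action_dim,).
    """
    w = np.exp(weights - np.max(weights))  # numerically stable softmax
    w = w / w.sum()
    return w @ np.asarray(expert_actions)  # weighted sum over experts

def distill_loss(student, obs, teacher_action, noise_std=0.01, n_samples=8, rng=None):
    """MSE between the student's action and the teacher's mixed action,
    averaged over randomly perturbed observations (a crude stand-in for
    probabilistic adversarial training)."""
    rng = np.random.default_rng(0) if rng is None else rng
    losses = []
    for _ in range(n_samples):
        noisy_obs = obs + rng.normal(0.0, noise_std, size=obs.shape)
        losses.append(np.mean((student(noisy_obs) - teacher_action) ** 2))
    return float(np.mean(losses))
```

With equal weights the mixture reduces to the mean of the expert actions; a trained mixing policy would instead shift weight toward the expert best suited to the current state.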

Authors (6)
  1. Yixuan Wang (95 papers)
  2. Chao Huang (244 papers)
  3. Zhilu Wang (14 papers)
  4. Shichao Xu (12 papers)
  5. Zhaoran Wang (164 papers)
  6. Qi Zhu (160 papers)
Citations (9)
