
Predictor-Feedback Stabilization of Globally Lipschitz Nonlinear Systems with State and Input Quantization (2501.14696v1)

Published 24 Jan 2025 in math.OC, cs.SY, and eess.SY

Abstract: We develop a switched nonlinear predictor-feedback control law to achieve global asymptotic stabilization for nonlinear systems with arbitrarily long input delay, under state quantization. The proposed design generalizes the nonlinear predictor-feedback framework by incorporating quantized measurements of both the plant and actuator states into the predictor state formulation. Due to the mismatch between the (inapplicable) exact predictor state and the predictor state constructed in the presence of state quantization, a global stabilization result is possible under a global Lipschitzness assumption on the vector field, as well as under the assumption of existence of a globally Lipschitz, nominal feedback law that achieves global exponential stability of the delay- and quantization-free system. To address the constraints imposed by quantization, a dynamic switching strategy is constructed, adjusting the quantizer's tunable parameter in a piecewise constant manner: initially increasing the quantization range, to capture potentially large system states, and subsequently refining the precision to reduce quantization error. The global asymptotic stability of the closed-loop system is established through solution estimates derived using backstepping transformations, combined with small-gain and input-to-state stability arguments. We also extend our approach to the case of input quantization.
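The closed-loop structure described in the abstract can be illustrated with a minimal, hypothetical sketch: a scalar plant dx/dt = f(x) + u(t − D) with a globally Lipschitz nonlinearity (here f(x) = sin(x) is an illustrative choice, not the paper's), a uniform quantizer whose tunable "zoom" parameter scales both range and precision, and a predictor state computed by forward-integrating the model from the quantized measurement using the buffered (delayed) controls. The paper's dynamic switching of the zoom parameter and its stability proof are omitted; this sketch keeps the parameter fixed and simply simulates the loop.

```python
import math

def quantize(x, mu, M=100.0, delta=0.01):
    """Uniform quantizer with tunable zoom parameter mu:
    range mu*M, precision mu*delta (saturates outside the range)."""
    if abs(x) > mu * M:
        return math.copysign(mu * M, x)
    return mu * delta * round(x / (mu * delta))

def f(x):
    """Globally Lipschitz plant nonlinearity (illustrative choice)."""
    return math.sin(x)

def predictor(xq, u_buffer, dt):
    """Approximate the D-seconds-ahead predictor state by forward
    Euler integration of the model, starting from the quantized
    measurement and driven by the buffered (not yet applied) inputs."""
    p = xq
    for u in u_buffer:
        p += dt * (f(p) + u)
    return p

# Closed-loop simulation of dx/dt = f(x) + u(t - D)
dt, D = 0.002, 0.25
n_delay = int(D / dt)
u_buffer = [0.0] * n_delay   # actuator state: controls in transit
x, mu = 2.0, 1.0             # fixed zoom here; the paper switches mu

for _ in range(int(4.0 / dt)):
    xq = quantize(x, mu)                 # quantized plant measurement
    p = predictor(xq, u_buffer, dt)      # predictor-state estimate
    u_new = -f(p) - 2.0 * p              # nominal g.L. feedback on predictor
    x += dt * (f(x) + u_buffer[0])       # plant receives the D-delayed input
    u_buffer = u_buffer[1:] + [u_new]
```

With an exact (unquantized) predictor, the feedback u = −f(p) − 2p cancels the nonlinearity D seconds ahead and yields dx/dt = −2x for t > D; quantization replaces exact convergence with convergence to a small residual set of size governed by the quantizer precision, which is what the paper's switching strategy then shrinks.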


