Automatic dimensionality reduction of Twin-in-the-Loop Observers (2401.10945v1)
Abstract: State-of-the-art vehicle dynamics estimation techniques usually share one common drawback: each variable to be estimated is computed with an independent, simplified filtering module. These modules run in parallel and need to be calibrated separately. To solve this issue, a unified Twin-in-the-Loop (TiL) Observer architecture has recently been proposed: the classical simplified control-oriented vehicle model in the estimator is replaced by a full-fledged vehicle simulator, or digital twin (DT). The states of the DT are corrected in real time through a linear time-invariant output-error law. Since the simulator is a black box, no explicit analytical formulation is available, so classical filter tuning techniques cannot be applied; the filter is instead tuned by solving a data-driven optimization problem with Bayesian Optimization. Because of the complexity of the DT, this optimization problem is high-dimensional. This paper proposes a procedure to tune the high-complexity observer by lowering the dimensionality of the search space; in particular, both a supervised and an unsupervised learning approach are analyzed. The strategies are validated for speed and yaw-rate estimation on real-world data.
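To make the two ingredients of the abstract concrete, the sketch below shows (i) a TiL observer step in which a black-box digital twin is propagated and then corrected by a static output-error law, and (ii) a plain Gaussian-process Bayesian-optimization loop that tunes the correction gain from data. This is a minimal illustration under stated assumptions, not the paper's implementation: the twin interface (`simulate_twin_step`, `measure`), the data layout, and the estimation cost are hypothetical placeholders, and the acquisition is maximized by a simple random-candidate search rather than the solver used in the paper.

```python
# Illustrative sketch: TiL observer correction + data-driven gain tuning via
# Gaussian-process Bayesian optimization. All interfaces are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.stats import norm


def til_observer_run(K, u_seq, y_seq, x0, simulate_twin_step, measure):
    """Propagate the black-box digital twin and apply the static
    output-error correction x <- x + K (y - y_hat) at every sample."""
    x = x0.copy()
    x_log = []
    for u, y in zip(u_seq, y_seq):
        x = simulate_twin_step(x, u)      # black-box twin: no analytic model available
        y_hat = measure(x)                # simulated measurements
        x = x + K @ (y - y_hat)           # linear time-invariant output-error correction
        x_log.append(x.copy())
    return np.array(x_log)


def estimation_cost(k_flat, data, K_shape, simulate_twin_step, measure, x_ref):
    """Data-driven tuning cost: RMSE between estimated and reference states."""
    K = k_flat.reshape(K_shape)
    x_hat = til_observer_run(K, data["u"], data["y"], data["x0"],
                             simulate_twin_step, measure)
    return np.sqrt(np.mean((x_hat - x_ref) ** 2))


def bayesian_optimize(cost, bounds, n_init=10, n_iter=40, seed=0):
    """GP-based Bayesian optimization with expected improvement; the
    acquisition is evaluated on random candidates for simplicity."""
    rng = np.random.default_rng(seed)
    dim = bounds.shape[0]
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
    Y = np.array([cost(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, Y)
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, dim))
        mu, sigma = gp.predict(cand, return_std=True)
        imp = Y.min() - mu
        z = imp / np.maximum(sigma, 1e-12)
        ei = imp * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        Y = np.append(Y, cost(x_next))
    return X[np.argmin(Y)], Y.min()
```

In this sketch every entry of the gain matrix K is a decision variable, so the search-space dimension grows with the number of twin states and measured outputs; the dimensionality-reduction strategies studied in the paper aim precisely at shrinking this space before running the optimization.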