
Leveraging Gated Recurrent Units for Iterative Online Precise Attitude Control for Geodetic Missions (2405.15159v1)

Published 24 May 2024 in eess.SY and cs.SY

Abstract: In this paper, we consider the problem of precise attitude control for geodetic missions, such as the GRACE Follow-On (GRACE-FO) mission. Traditional and well-established control methods, such as Proportional-Integral-Derivative (PID) controllers, have been the standard in attitude control for most space missions, including the GRACE-FO mission. Instead of significantly modifying (or replacing) the original PID controllers used for these missions, we introduce an iterative modification to the PID controller that improves attitude control precision (i.e., reduces attitude error). The proposed modification leverages Gated Recurrent Units (GRUs) to learn and predict external disturbance trends derived from incoming attitude measurements from the GRACE satellites. Our analysis reveals a distinct trend in the external disturbance time-series data, suggesting the potential utility of GRUs for predicting future disturbances acting on the system. The learned GRU model compensates for these disturbances within the standard PID control loop in real time via an additive correction term that is updated at regular time intervals. The simulation results show a significant reduction in attitude error, verifying the efficacy of our proposed approach.
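
The abstract describes a standard PID loop augmented with an additive GRU-based disturbance-correction term. Below is a minimal sketch of that idea, assuming a single-axis attitude-error signal, a pretrained one-step-ahead GRU disturbance predictor, and placeholder gains and window lengths; the class and function names (GRUDisturbancePredictor, pid_with_gru_correction) and all numeric values are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class GRUDisturbancePredictor(nn.Module):
    """Illustrative GRU that maps a window of recent attitude-error
    measurements to a one-step-ahead external-disturbance estimate."""

    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, steps, 1) sequence of recent measurements
        _, h_n = self.gru(window)          # h_n: (1, batch, hidden)
        return self.head(h_n[-1])          # (batch, 1) disturbance estimate


def pid_with_gru_correction(error_history, integral, prev_error, predictor,
                            kp=1.0, ki=0.1, kd=0.5, dt=0.1):
    """One control step: standard PID command plus an additive correction
    that cancels the GRU-predicted disturbance. Gains are placeholders."""
    error = error_history[-1]
    integral += error * dt
    derivative = (error - prev_error) / dt
    u_pid = kp * error + ki * integral + kd * derivative

    # Feed the recent error window to the pretrained GRU predictor.
    window = torch.tensor(error_history, dtype=torch.float32).view(1, -1, 1)
    with torch.no_grad():
        d_hat = predictor(window).item()

    # Additive correction term: subtract the predicted disturbance.
    u = u_pid - d_hat
    return u, integral, error
```

In the scheme the abstract describes, such a correction term would be refreshed at regular time intervals as the GRU is updated with incoming attitude measurements, while the underlying PID structure is left unchanged.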
