
Active Control of Flow over Rotating Cylinder by Multiple Jets using Deep Reinforcement Learning (2307.12083v3)

Published 22 Jul 2023 in physics.flu-dyn and cs.LG

Abstract: The real power of artificial intelligence shows in reinforcement learning, whose dynamic nature makes it more demanding both computationally and physically. Rotation and injection are among the proven active flow control techniques for drag reduction on bluff bodies. In this paper, rotation is added to the cylinder alongside a deep reinforcement learning (DRL) algorithm that drives multiple controlled jets to reach the maximum possible drag suppression. The characteristics of the DRL code are presented, including its control parameters, their limitations, and the optimization of the DRL network for use with rotation. The work focuses on optimizing the number and positions of the jets, the locations of the sensors, and the flow-rate budget of the jets, expressed as the maximum allowed flow rate of each actuation and the total number of actuations per episode. Combining rotation with DRL proves promising: it suppresses vortex shedding, stabilizes the Kármán vortex street, and reduces the drag coefficient by up to 49.75%. It is also shown that placing more sensors at more locations is not always the better choice; the number and placement of sensors should be chosen according to the needs of the user and the corresponding configuration. Moreover, allowing the agent access to higher flow rates mostly degrades performance, except when the cylinder rotates. In all cases, the agent keeps the lift coefficient near zero or stabilizes it at a smaller value.
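For reference, the drag and lift coefficients quoted above follow the usual normalization for a two-dimensional cylinder of diameter $D$ in a stream of density $\rho$ and velocity $U$ (a standard convention, not specific to this paper):

$$C_D = \frac{2 F_D}{\rho U^2 D}, \qquad C_L = \frac{2 F_L}{\rho U^2 D},$$

where $F_D$ and $F_L$ are the drag and lift forces per unit span.

The abstract frames the agent's action space in terms of a cap on the flow rate of each actuation and a reward built around the drag and lift coefficients. The Python sketch below shows one plausible way to encode those ingredients; the function names, the zero-net-mass-flux balancing step, and the reward weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def enforce_zero_net_flux(actions):
    """Shift jet flow rates so they sum to zero (zero net mass flux).

    This balancing step is common in DRL jet-control studies (e.g. the
    Rabault et al. 2019 setup) and is assumed here for illustration; the
    abstract itself only specifies a cap on each actuation's flow rate.
    """
    actions = np.asarray(actions, dtype=float)
    return actions - actions.mean()

def clip_jet_actions(actions, q_max):
    """Clip each jet's commanded flow rate to [-q_max, q_max].

    Mirrors the per-actuation flow-rate cap studied in the paper; q_max is
    a hypothetical parameter name. Clipping after the zero-net shift can
    leave a small residual net flux.
    """
    return np.clip(actions, -q_max, q_max)

def reward(c_d, c_l, c_d_baseline, lift_weight=0.2):
    """Reward drag reduction relative to the uncontrolled baseline and
    penalize lift magnitude, nudging the agent to hold C_L near zero.

    The functional form and lift_weight are illustrative assumptions.
    """
    return (c_d_baseline - c_d) - lift_weight * abs(c_l)

# Example with three jets, a per-jet cap of 0.1, and fictitious coefficients.
raw = [0.25, -0.05, 0.12]                     # raw network outputs
a = clip_jet_actions(enforce_zero_net_flux(raw), q_max=0.1)
print(a)                                      # [ 0.1  -0.1   0.0133...]
print(reward(c_d=1.58, c_l=0.02, c_d_baseline=3.14))  # 1.556
```

In this shaping, the clip bounds what the agent can inject per actuation, and the lift penalty reflects the reported behavior of keeping the lift coefficient near zero while the drag term rewards suppression relative to the uncontrolled flow.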

