Automated Design and Optimization of Distributed Filtering Circuits via Reinforcement Learning (2402.14236v2)
Abstract: Designing distributed filter circuits (DFCs) is complex and time-consuming, as it involves setting and optimizing multiple hyperparameters. Traditional optimization methods, such as using the commercial finite-element solver HFSS (High-Frequency Structure Simulator) to enumerate all parameter combinations at fixed steps and then simulate each combination, are not only time-consuming and labor-intensive but also rely heavily on the expertise and experience of electronics engineers, making them difficult to adapt to rapidly changing design requirements. Moreover, these commercial tools struggle to make precise adjustments when performance is sensitive to small numerical changes in the parameters, which limits their optimization effectiveness. This study proposes a novel end-to-end automated method for DFC design. The proposed method harnesses reinforcement learning (RL) algorithms, eliminating the dependence on engineers' design experience and thereby significantly reducing the subjectivity and constraints associated with circuit design. The experimental findings demonstrate clear improvements in design efficiency and quality over traditional engineer-driven methods. Furthermore, the proposed method performs especially well when designing complex or rapidly evolving DFCs, highlighting the substantial potential of RL in circuit design automation. In particular, compared with CircuitGNN, an existing automated DFC design method, our method achieves an average performance improvement of 8.72%, and its execution efficiency is 2000 times that of CircuitGNN on CPU and 241 times on GPU.
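To make the abstract's core idea concrete, below is a minimal, self-contained sketch of how an RL-style agent might iteratively tune discretized filter parameters toward a target response, replacing exhaustive fixed-step enumeration. Everything here is illustrative: `simulate` is a hypothetical stand-in for an EM solver such as HFSS (or a learned surrogate), `TARGET` is a toy response goal, and the bandit-style epsilon-greedy update is a simplification; the paper's actual state, action, and reward design is not specified in the abstract.

```python
# Illustrative sketch only: an epsilon-greedy agent nudges discretized
# circuit parameters and is rewarded for matching a target response.
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.9, 0.1, 0.9])  # toy target response at three probe frequencies

def simulate(params):
    """Hypothetical surrogate for an EM solver: maps parameters to a toy response."""
    w = np.array([[0.5, -0.2, 0.1],
                  [-0.3, 0.4, 0.2],
                  [0.1, 0.3, -0.5]])
    return 1.0 / (1.0 + np.exp(-(w @ params)))  # bounded pseudo-response

def reward(params):
    return -np.abs(simulate(params) - TARGET).mean()  # closer to target = higher

STEP = 0.1
# Each action nudges one of three parameters up or down by a fixed step,
# mirroring the discretized search the abstract contrasts with enumeration.
ACTIONS = [(i, s) for i in range(3) for s in (-STEP, +STEP)]

params = np.zeros(3)
q = np.zeros(len(ACTIONS))  # context-free action values (bandit-style, not full Q-learning)
eps, alpha = 0.2, 0.1

for episode in range(500):
    a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(q))
    idx, delta = ACTIONS[a]
    candidate = params.copy()
    candidate[idx] += delta
    r = reward(candidate)
    if r > reward(params):  # keep only improving designs
        params = candidate
    q[a] += alpha * (r - q[a])  # incremental value estimate for this action

print("tuned params:", params.round(3), "final reward:", round(reward(params), 4))
```

In a real pipeline, each `reward` evaluation would be a costly solver call, which is precisely why the abstract emphasizes execution efficiency relative to simulation-driven enumeration.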
- X. Zhang, M. Jia, L. Chen, J. Ma, and J. Qiu, “Filtered-OFDM: Enabler for flexible waveform in the 5th generation cellular networks,” in 2015 IEEE Global Communications Conference (GLOBECOM). IEEE, 2015, pp. 1–6.
- S. Roy and A. Chandra, “Interpolated band-pass method based narrow-band FIR filter: A prospective candidate in filtered-OFDM technique for the 5G cellular network,” in TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). IEEE, 2019, pp. 311–315.
- J. Lee, M. S. Uhm, and I.-B. Yom, “A dual-passband filter of canonical structure for satellite applications,” IEEE Microwave and Wireless Components Letters, vol. 14, no. 6, pp. 271–273, 2004.
- G. Zhang, H. He, and D. Katabi, “Circuit-GNN: Graph neural networks for distributed circuit design,” in International Conference on Machine Learning. PMLR, 2019, pp. 7364–7373.
- J.-S. Hong and M. J. Lancaster, “Couplings of microstrip square open-loop resonators for cross-coupled planar microwave filters,” IEEE Transactions on Microwave Theory and Techniques, vol. 44, no. 11, pp. 2099–2109, 1996.
- D. M. Colleran, C. Portmann, A. Hassibi, C. Crusius, S. S. Mohan, S. Boyd, T. H. Lee, and M. del Mar Hershenson, “Optimization of phase-locked loop circuits via geometric programming,” in Proceedings of the IEEE 2003 Custom Integrated Circuits Conference, 2003. IEEE, 2003, pp. 377–380.
- B. Liu, Y. Wang, Z. Yu, L. Liu, M. Li, Z. Wang, J. Lu, and F. V. Fernández, “Analog circuit optimization system based on hybrid evolutionary algorithms,” Integration, vol. 42, no. 2, pp. 137–148, 2009.
- Y. Wang, M. Orshansky, and C. Caramanis, “Enabling efficient analog synthesis by coupling sparse regression and polynomial optimization,” in Proceedings of the 51st Annual Design Automation Conference, 2014, pp. 1–6.
- T. McConaghy, P. Palmers, M. Steyaert, and G. G. Gielen, “Trustworthy genetic programming-based synthesis of analog circuit topologies using hierarchical domain-specific building blocks,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 4, pp. 557–570, 2011.
- N. Lourenço and N. Horta, “GENOM-POF: Multi-objective evolutionary synthesis of analog ICs with corners validation,” in Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, 2012, pp. 1119–1126.
- W. Lyu, P. Xue, F. Yang, C. Yan, Z. Hong, X. Zeng, and D. Zhou, “An efficient Bayesian optimization approach for automated optimization of analog circuits,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 65, no. 6, pp. 1954–1967, 2017.
- W. Lyu, F. Yang, C. Yan, D. Zhou, and X. Zeng, “Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design,” in International Conference on Machine Learning. PMLR, 2018, pp. 3306–3314.
- ——, “Multi-objective Bayesian optimization for analog/RF circuit synthesis,” in Proceedings of the 55th Annual Design Automation Conference, 2018, pp. 1–6.
- H. He, G. Zhang, J. Holloway, and D. Katabi, “End-to-end learning for distributed circuit design,” in Workshop on ML for Systems at NeurIPS, 2018.
- Y. Cao, G. Wang, and Q.-J. Zhang, “A new training approach for parametric modeling of microwave passive components using combined neural networks and transfer functions,” IEEE Transactions on Microwave Theory and Techniques, vol. 57, no. 11, pp. 2727–2742, 2009.
- F. Feng, C. Zhang, J. Ma, and Q.-J. Zhang, “Parametric modeling of EM behavior of microwave components using combined neural networks and pole-residue-based transfer functions,” IEEE Transactions on Microwave Theory and Techniques, vol. 64, no. 1, pp. 60–77, 2015.
- F. Feng, C. Zhang, J. Ma, Q.-J. Zhang et al., “Parametric modeling of microwave components using adjoint neural networks and pole-residue transfer functions with EM sensitivity analysis,” IEEE Transactions on Microwave Theory and Techniques, vol. 65, no. 6, pp. 1955–1975, 2017.
- F. Mir, L. Kouhalvandi, and L. Matekovits, “Deep neural learning based optimization for automated high performance antenna designs,” Scientific Reports, vol. 12, no. 1, p. 16801, 2022.
- Z. Li, J. Peng, Y. Mei, S. Lin, Y. Wu, O. Padon, and Z. Jia, “Quarl: A learning-based quantum circuit optimizer,” arXiv preprint arXiv:2307.10120, 2023.
- D. Krylov, P. Khajeh, J. Ouyang, T. Reeves, T. Liu, H. Ajmal, H. Aghasi, and R. Fox, “Learning to design analog circuits to meet threshold specifications,” in International Conference on Machine Learning. PMLR, 2023, pp. 17858–17873.
- C. J. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, pp. 279–292, 1992.
- R. S. Sutton, “Generalization in reinforcement learning: Successful examples using sparse coarse coding,” Advances in Neural Information Processing Systems, vol. 8, 1995.
- P. Stone, R. S. Sutton, and G. Kuhlmann, “Reinforcement learning for robocup soccer keepaway,” Adaptive Behavior, vol. 13, no. 3, pp. 165–188, 2005.
- L.-J. Lin and T. M. Mitchell, “Reinforcement learning with hidden states,” From animals to animats, vol. 2, pp. 271–280, 1993.
- V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
- H. Van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double Q-learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, 2016.
- Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas, “Dueling network architectures for deep reinforcement learning,” in International Conference on Machine Learning. PMLR, 2016, pp. 1995–2003.
- S. Levine, C. Finn, T. Darrell, and P. Abbeel, “End-to-end training of deep visuomotor policies,” The Journal of Machine Learning Research, vol. 17, no. 1, pp. 1334–1373, 2016.
- J. Heinrich and D. Silver, “Deep reinforcement learning from self-play in imperfect-information games,” arXiv preprint arXiv:1603.01121, 2016.
- X. Zhao, L. Zhang, Z. Ding, L. Xia, J. Tang, and D. Yin, “Recommendations with negative feedback via pairwise deep reinforcement learning,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 1040–1048.
- V. Konda and J. Tsitsiklis, “Actor-critic algorithms,” Advances in Neural Information Processing Systems, vol. 12, 1999.
- V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, “Asynchronous methods for deep reinforcement learning,” in International Conference on Machine Learning. PMLR, 2016, pp. 1928–1937.
- J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” in International Conference on Machine Learning. PMLR, 2015, pp. 1889–1897.
- J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017.
- J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, M. Jacquot, L. Larger, and D. Brunner, “Reinforcement learning in a large-scale photonic recurrent neural network,” Optica, vol. 5, no. 6, pp. 756–760, 2018.
- E. Kaufmann, L. Bauersfeld, A. Loquercio, M. Müller, V. Koltun, and D. Scaramuzza, “Champion-level drone racing using deep reinforcement learning,” Nature, vol. 620, no. 7976, pp. 982–987, 2023.
- T. D. Kulkarni, K. Narasimhan, A. Saeedi, and J. Tenenbaum, “Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation,” Advances in Neural Information Processing Systems, vol. 29, 2016.
- T. Sugiyama, N. Schweighofer, and J. Izawa, “Reinforcement learning establishes a minimal metacognitive process to monitor and control motor learning performance,” Nature Communications, vol. 14, no. 1, p. 3988, 2023.
- C. Barata, V. Rotemberg, N. C. Codella, P. Tschandl, C. Rinner, B. N. Akay, Z. Apalla, G. Argenziano, A. Halpern, A. Lallas et al., “A reinforcement learning model for ai-based decision support in skin cancer,” Nature Medicine, vol. 29, no. 8, pp. 1941–1946, 2023.
- P. Guo, K. Xiao, Z. Ye, H. Zhu, and W. Zhu, “Intelligent career planning via stochastic subsampling reinforcement learning,” Scientific Reports, vol. 12, no. 1, p. 8332, 2022.
- H. Ju, R. Juan, R. Gomez, K. Nakamura, and G. Li, “Transferring policy of deep reinforcement learning from simulation to reality for robotics,” Nature Machine Intelligence, vol. 4, no. 12, pp. 1077–1087, 2022.
- E. Kuprikov, A. Kokhanovskiy, K. Serebrennikov, and S. Turitsyn, “Deep reinforcement learning for self-tuning laser source of dissipative solitons,” Scientific Reports, vol. 12, no. 1, p. 7185, 2022.
- G. Dulac-Arnold, R. Evans, H. van Hasselt, P. Sunehag, T. Lillicrap, J. Hunt, T. Mann, T. Weber, T. Degris, and B. Coppin, “Deep reinforcement learning in large discrete action spaces,” arXiv preprint arXiv:1512.07679, 2015.
- R. Dadashi, L. Hussenot, D. Vincent, S. Girgin, A. Raichuk, M. Geist, and O. Pietquin, “Continuous control with action quantization from demonstrations,” arXiv preprint arXiv:2110.10149, 2021.
- A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., “PyTorch: An imperative style, high-performance deep learning library,” Advances in Neural Information Processing Systems, vol. 32, 2019.